A recent article in the NY Times talked about how the US State Department is behind a project to build up mesh networks that can be used in countries with authoritarian governments, helping citizens of those places access an internet that is often greatly limited. This isn't actually new. In fact, three years ago we wrote about another NY Times article about the State Department funding these kinds of projects. Nor is the specific project in the latest NYT article new. A few months back, we had covered an important milestone with Commotion, the mesh networking project coming out of New America Foundation's Open Technology Institute (OTI).
But the latest NYT article is especially odd, not because it repeats old news, but because it tries to build a narrative that Commotion and other such projects funded by the State Department are somehow awkward because they could be used to fight back against government surveillance, such as that conducted by the NSA. The problem is that the issues are unrelated: nothing in mesh networking deals with stopping surveillance. As Ed Felten notes, the Times reporters appear to be confusing things greatly:
There’s only one problem: mesh networks don’t do much to protect you from surveillance. They’re useful, but not for that purpose.
A mesh network is constructed from a bunch of nodes that connect to each other opportunistically and figure out how to forward packets of data among themselves. This is in contrast to the hub-and-spoke model common to most networks.
The big advantage of mesh networks is availability: set up nodes wherever you can, and they’ll find other nearby nodes and self-organize to route data. It’s not always the most efficient way to move data, but it is resilient and can provide working connectivity in difficult places and conditions. This alone makes mesh networks worth pursuing.
But what mesh networks don’t do is protect your privacy. As soon as an adversary connects to your network, or your network links up to the Internet, you’re dealing with the same security and privacy problems you would have had with an ordinary connection.
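Felten's point can be sketched in a few lines of Python. This is a toy flood-routing simulation, not Commotion's actual protocol (the function and topology names are hypothetical): any node that joins the mesh participates in forwarding, which is exactly why availability is easy and privacy is not.

```python
from collections import deque

def flood_route(links, src, dst):
    """Breadth-first route discovery over an ad-hoc topology.

    links maps each node to the set of nodes it can currently reach.
    Every participating node forwards traffic -- including a node an
    adversary quietly adds to the mesh.
    """
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in links.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # no connectivity at all

# When a link goes down, the mesh self-heals by finding another path:
links = {"a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "d"}, "d": {"b", "c"}}
print(flood_route(links, "a", "d"))  # a shortest path, e.g. ['a', 'b', 'd']
links["b"].discard("d")              # the b-d radio link fails
links["d"].discard("b")
print(flood_route(links, "a", "d"))  # reroutes around it: ['a', 'c', 'd']
```

Notice that nothing here encrypts or anonymizes anything: the resilience comes from every node relaying in the clear, which is the availability/privacy distinction Felten is drawing.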
The whole point of Commotion and other mesh networks is availability, not privacy. The target use is for places where governments are seeking to shut down internet access, not surveil them. Yes, if you could set up a mesh network that routed around government surveillance points, you could circumvent some level of surveillance, but the networks themselves are not designed to be surveillance proof. In fact, back in January when we wrote about Commotion, we pointed out directly that the folks behind the project themselves are pretty explicit that Commotion is not about hiding your identity or preventing monitoring of internet traffic.
Could a mesh network also be combined with stronger privacy and security protections? Yes, but that's different than just assuming that mesh networking takes on that problem by itself. It doesn't -- and it's misleading for the NYT to suggest otherwise.
The United States spends more than $50 billion a year on spying and intelligence, while the folks who build important defense software — in this case a program called OpenSSL that ensures that your connection to a website is encrypted — are four core programmers, only one of whom calls it a full-time job.
In a typical year, the foundation that supports OpenSSL receives just $2,000 in donations. The programmers have to rely on consulting gigs to pay for their work. "There should be at least a half dozen full time OpenSSL team members, not just one, able to concentrate on the care and feeding of OpenSSL without having to hustle commercial work," says Steve Marquess, who raises money for the project.
Is it any wonder that this Heartbleed bug slipped through the cracks?
Dan Kaminsky, a security researcher who saved the Internet from a similarly fundamental flaw back in 2008, says that Heartbleed shows that it's time to get "serious about figuring out what software has become Critical Infrastructure to the global economy, and dedicating genuine resources to supporting that code."
The Obama Administration has said it is doing just that with its national cybersecurity initiative, which establishes guidelines for strengthening the defense of our technological infrastructure — but it does not provide funding for the implementation of those guidelines.
Instead, the National Security Agency, which has responsibility to protect U.S. infrastructure, has worked to weaken encryption standards. And so private websites — such as Facebook and Google, which were affected by Heartbleed — often use open-source tools such as OpenSSL, where the code is publicly available and can be verified to be free of NSA backdoors.
The federal government spent at least $65 billion between 2006 and 2012 to secure its own networks, according to a February report from the Senate Homeland Security and Government Affairs Committee. And many critical parts of the private sector — such as nuclear reactors and banking — follow sector-specific cybersecurity regulations.
But private industry has also failed to fund its critical tools. As cryptographer Matthew Green says, "Maybe in the midst of patching their servers, some of the big companies that use OpenSSL will think of tossing them some real no-strings-attached funding so they can keep doing their job."
In the meantime, the rest of us are left with the unfortunate job of changing all our passwords, which may have been stolen from websites that were using the broken encryption standard. It's unclear whether the bug was exploited by criminals or intelligence agencies. (The NSA says it didn't know about it.)
It's worth noting, however, that the risk of your passwords being stolen via Heartbleed is still lower than the risk of your passwords being hacked from a website that failed to protect them properly. Criminals have so many ways to obtain your information these days — by sending you a fake email from your bank or hacking into a retailer's unguarded database — that it's unclear how many would have gone through the trouble of exploiting this encryption flaw.
The problem is that if your passwords were hacked by the Heartbleed bug, the hack would leave no trace. And so, unfortunately, it's still a good idea to assume that your passwords might have been stolen.
So, you need to change them. If you're like me, you have way too many passwords. So I suggest starting with the most important ones — your email passwords. Anyone who gains control of your email can click "forgot password" on your other accounts and get a new password emailed to them. As a result, email passwords are the key to the rest of your accounts. After email, I'd suggest changing banking and social media account passwords.
But before you change your passwords, you need to check whether the website has been patched. You can test whether a site has been patched by typing the URL here. (Look for the green highlighted "Now Safe" result.)
If the site has been patched, then change your password. If the site has not been patched, wait until it has been patched before you change your password.
A reminder about how to make passwords: Forget all the password advice you've been given about using symbols and not writing down your passwords. Only two things matter: don't reuse passwords across websites, and the longer the password, the better.
I suggest using password management software, such as 1Password or LastPass, to generate the vast majority of your passwords. And for email, banking and your password to your password manager, I suggest a method of picking random words from a dictionary called Diceware. If that seems too hard, just make your password super long — at least 30 or 40 characters long, if possible.
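The Diceware idea is simple enough to sketch. This is an illustrative toy, not the official tool: the real Diceware list has 7,776 words (about 12.9 bits of entropy per word), and the stand-in wordlist and function name below are my own.

```python
import secrets

# Stand-in wordlist; the real Diceware list has 7,776 entries,
# so each word adds log2(7776) ~= 12.9 bits of entropy.
WORDS = ["correct", "horse", "battery", "staple", "cloud", "anchor",
         "maple", "violet", "gravel", "onion", "tundra", "pixel"]

def diceware_passphrase(n_words=6, wordlist=WORDS):
    """Pick words uniformly at random with a cryptographic RNG.

    secrets.choice draws from the OS entropy pool; the ordinary
    random module is predictable and must never be used for passwords.
    """
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

print(diceware_passphrase())  # e.g. "maple onion staple cloud pixel violet"
```

Six words from the full list gives roughly 77 bits of entropy, which is why a short sentence of random words beats a shorter string of symbols.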
Well, this is interesting. I naturally assumed that when the various researchers first discovered Heartbleed, they told the government about it. While I know that some people think this is crazy, it is fairly standard practice, especially for a bug as big and as problematic as Heartbleed. However, the National Journal has an article suggesting that Google deliberately chose not to tell the government about Heartbleed. No official reason is given, but assuming this is true, it wouldn't be difficult to understand why. Google employees (especially on the security side) still seem absolutely furious about the NSA hacking into Google's data centers, and various other privacy violations. When a National Journal reporter contacted Google about the issue, note the response:
Asked whether Google discussed Heartbleed with the government, a company spokeswoman said only that the "security of our users' information is a top priority" and that Google users do not need to change their passwords.
Here's the thing: if the NSA hadn't become so focused on hacking everyone, it wouldn't be in this position. The NSA's dual offense and defense role has poisoned the waters, such that no company can or should trust the government to do the responsible thing and help secure vulnerable systems any more. And for that, the government only has itself to blame.
It's not too surprising that one of the first questions many people have been asking about the Heartbleed vulnerability in OpenSSL is whether or not it was a backdoor placed there by intelligence agencies (or other malicious parties). And, even if that wasn't the case, a separate question is whether or not intelligence agencies found the bug earlier and have been exploiting it. So far, the evidence is inconclusive at best -- and part of the problem is that, in many cases, it would be impossible to go back and figure it out. The guy who introduced the flaw, Robin Seggelmann, seems rather embarrassed about the whole thing but insists it was an honest mistake:
Mr Seggelmann, of Munster in Germany, said the bug which introduced the flaw was "unfortunately" missed by him and a reviewer when it was introduced into the open source OpenSSL encryption protocol over two years ago.
"I was working on improving OpenSSL and submitted numerous bug fixes and added new features," he said.
"In one of the new features, unfortunately, I missed validating a variable containing a length."
After he submitted the code, a reviewer "apparently also didn’t notice the missing validation", Mr Seggelmann said, "so the error made its way from the development branch into the released version." Logs show that reviewer was Dr Stephen Henson.
Mr Seggelmann said the error he introduced was "quite trivial", but acknowledged that its impact was "severe".
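The "quite trivial" error Seggelmann describes — trusting a length field without validating it — can be shown with a toy model. This is Python standing in for OpenSSL's C (in real C, the over-read pulls in whatever adjacent heap memory happens to hold; here a single buffer plays the part of the heap), and all names and values are hypothetical:

```python
# Toy model of the Heartbleed over-read. server_memory stands in for
# the process heap: the 4-byte heartbeat payload sits right next to
# unrelated secrets.
server_memory = bytearray(b"ping" + b"...session_key=hunter2...")

def heartbeat_broken(claimed_len):
    # The bug: trust the length field the client sent and echo back
    # that many bytes, regardless of how big the payload actually was.
    return bytes(server_memory[:claimed_len])

def heartbeat_fixed(claimed_len, actual_payload_len=4):
    # The fix: check the claimed length against the real payload size
    # and silently drop malformed heartbeats.
    if claimed_len > actual_payload_len:
        return b""
    return bytes(server_memory[:claimed_len])

print(heartbeat_broken(64))  # leaks the adjacent "session_key=hunter2"
print(heartbeat_fixed(64))   # b"" -- malformed request dropped
```

The actual patch to OpenSSL did essentially the second thing: discard any heartbeat whose claimed payload length exceeds the record that carried it.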
Later in that same interview, he insists he has no association with intelligence agencies, and also notes that it is "entirely possible" that intelligence agencies had discovered the bug and had made use of it.
Another oddity in all of this is that, even though the flaw itself was introduced two years ago, two separate individuals appear to have discovered it on the exact same day. Vocativ, which has a great story giving the behind the scenes on the discovery by Codenomicon, mentions the following in passing:
Unbeknownst to Chartier, a little-known security researcher at Google, Neel Mehta, had discovered and reported the OpenSSL bug on the same day. Considering the bug had actually existed since March 2012, the odds of the two research teams, working independently, finding and reporting the bug at the same time was highly surprising.
Highly surprising. But not necessarily indicative of anything. It could be a crazy coincidence. Kim Zetter, over at Wired explores the "did the NSA know about Heartbleed" angle, and points out accurately that while the bug is catastrophic in many ways, what it's not good for is targeting specific accounts. The whole issue with Heartbleed is that it "bleeds" chunks of memory that are on the server. It's effectively a giant crapshoot as to what you get when you exploit it. Yes, it bleeds all sorts of things: including usernames, passwords, private keys, credit card numbers and the like -- but you never quite know what you'll get, which makes it potentially less useful for intelligence agencies. As that Wired article notes, at best, using the Heartbleed exploit would be "very inefficient" for the NSA.
But that doesn't mean there aren't reasons to be fairly concerned. Peter Eckersley, over at EFF, has tracked down at least one potentially scary example that may very well be someone exploiting Heartbleed back in November of last year. It's not definitive, but it is worth exploring further.
The second log seems much more troubling. We have spoken to Ars Technica's second source, Terrence Koeman, who reports finding some inbound packets, immediately following the setup and termination of a normal handshake, containing another Client Hello message followed by the TCP payload bytes 18 03 02 00 03 01 40 00 in ingress packet logs from November 2013. These bytes are a TLS Heartbeat with contradictory length fields, and are the same as those in the widely circulated proof-of-concept exploit.
Koeman's logs had been stored on magnetic tape in a vault. The source IP addresses for the attack were 22.214.171.124 and 126.96.36.199. Interestingly, those two IP addresses appear to be part of a larger botnet that has been systematically attempting to record most or all of the conversations on Freenode and a number of other IRC networks. This is an activity that makes a little more sense for intelligence agencies than for commercial or lifestyle malware developers.
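Those eight bytes from Koeman's logs decode exactly the way the EFF describes. A quick check, using nothing but the byte layout of a TLS record and the heartbeat message inside it:

```python
import struct

packet = bytes.fromhex("1803020003014000")

# TLS record header: content type (1 byte), version (2 bytes),
# record length (2 bytes, big-endian)
content_type, ver_major, ver_minor, record_len = struct.unpack(">BBBH", packet[:5])
# Heartbeat message: type (1 byte), claimed payload length (2 bytes)
hb_type, payload_len = struct.unpack(">BH", packet[5:8])

assert content_type == 0x18               # heartbeat record
assert (ver_major, ver_minor) == (3, 2)   # TLS 1.1
assert hb_type == 1                       # heartbeat request
print(record_len)   # 3: the record carries only 3 bytes...
print(payload_len)  # 16384: ...but asks the server to echo 16KB back
```

That is the "contradictory length fields" the EFF mentions: a 3-byte record claiming a 16,384-byte payload, the same malformed heartbeat used by the circulating proof-of-concept exploit.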
EFF is asking people to try to replicate Koeman's findings, while also looking for any other possible evidence of Heartbleed exploits being used in the wild. As it stands now, there doesn't seem to be any conclusive evidence that it was used -- but that doesn't mean it wasn't being used. After all, it's been known that the NSA has a specific program designed to subvert SSL, so there's a decent chance that someone in the NSA could have discovered this bug earlier, and rather than doing its job and helping to protect the security of the internet, chose to use it to its own advantage first.
The USTR seems to have a worrying need to blame other countries. Alongside the infamous Special 301 Report, which puts a selection of nations on the naughty step because of their failure to bend to the will of the US copyright industries, there's the less well-known Section 1377 Review, which considers "Compliance with Telecommunications Trade Agreements." Here's some information about the latest one (pdf):
The Section 1377 Review ("Review") is based on public comments filed by interested parties and information developed from ongoing contact with industry, private sector, and foreign government representatives in various countries. This year USTR received four comments and two reply comments from the private sector, and one comment from a foreign government.
The ability to send, access and manage data remotely across borders is integral to global services, including converged and hybrid services such as cloud services. However, the tremendous increase in cross-border data flows has raised concerns on the part of many governments. Given that cross-border services trade is, at its essence, the exchange of data, unnecessary restrictions on data flows have the effect of creating barriers to trade in services.
That seems to be reflected in the following section of the USTR's review:
Recent proposals from countries within the European Union to create a Europe-only electronic network (dubbed a "Schengen cloud" by advocates) or to create national-only electronic networks could potentially lead to effective exclusion or discrimination against foreign service suppliers that are directly offering network services, or dependent on them.
Deutsche Telekom AG (DTAG), Germany's biggest phone company, is publicly advocating for EU-wide statutory requirements that electronic transmissions between EU residents stay within the territory of the EU, in the name of stronger privacy protection. Specifically, DTAG has called for statutory requirements that all data generated within the EU not be unnecessarily routed outside of the EU; and has called for revocation of the U.S.-EU "Safe Harbor" Framework, which has provided a practical mechanism for both U.S companies and their business partners in Europe to export data to the United States, while adhering to EU privacy requirements.
Of course, Deutsche Telekom is not the only one calling for Safe Harbor to be revoked: the European Parliament's inquiry into the mass surveillance of EU citizens has also proposed that, along with a complete rejection of TAFTA/TTIP unless it respects the rights of Europeans. Strangely, the USTR doesn't mention that fact in its complaint, but goes on to say:
The United States and the EU share common interests in protecting their citizens' privacy, but the draconian approach proposed by DTAG and others appears to be a means of providing protectionist advantage to EU-based ICT suppliers.
You've got to love the idea that too much privacy protection is "draconian". The USTR continues to tiptoe around the real reason that not just Deutsche Telekom but even Germany's Chancellor, Angela Merkel, are both keen on the idea of an EU-only cloud:
Given the breadth of legitimate services that rely on geographically-dispersed data processing and storage, a requirement to route all traffic involving EU consumers within Europe would decrease efficiency and stifle innovation. For example, a supplier may transmit, store, and process its data outside the EU more efficiently, depending on the location of its data centers. An innovative supplier from outside of Europe may refrain from offering its services in the EU because it may find EU-based storage and processing requirements infeasible for nascent services launched from outside of Europe.
The USTR saves what it obviously sees as its killer punch for last:
Furthermore, any mandatory intra-EU routing may raise questions with respect to compliance with the EU's trade obligations with respect to Internet-enabled services. Accordingly, USTR will be carefully monitoring the development of any such proposals.
Got that, Europeans? If you dare to try to protect yourselves by creating a slightly more secure EU-only cloud in response to the NSA breaking into everything and anything, you may find yourself referred to the World Trade Organization or something....
It's interesting that the USTR brings up this issue -- doubtless a reflection of the huge direct losses that revelations about massive surveillance on Europeans and others are likely to cause the US computing industry. But trying to paint itself as the wronged party here is not going to endear the USTR to European politicians. At a time when Safe Harbor and even the TAFTA/TTIP negotiations are being called into question in the EU, such an aggressive and insulting stance seems a very stupid move.
Almost exactly a decade ago (man, time flies...), we first discussed the question of whether or not it should be against the law to get hacked. The FTC had gone after Tower Records (remember them?) for its weak data security practices. That resulted in a series of questions about where the liability should fall. Many people, quite reasonably, say that there should be incentives for companies to better manage data security and (especially) to protect their users. But, it's also true that sooner or later, if you're a target, you're going to get hacked. Ten years later and this is still an issue. The FTC went after Wyndham hotels for its egregiously bad data security (which made it easy for hackers to get hotel guests' information, including credit cards), but Wyndham fought back, saying the FTC had no authority over such matters, especially without having first issued specific rules.
Again, Wyndham's security here was egregiously bad. It didn't encrypt payment data, and also used default logins and passwords for its systems. So there's an argument here that some kind of line can be drawn between purely negligent behavior, such as Wyndham's (lack of) data security, and companies who actually do follow some rather basic security practices, and yet still fall prey to hacks. What makes things tricky is that pretty large gray area in between the two extremes.
Yesterday, we wrote about just how terrible the Heartbleed bug in OpenSSL is. It's been generating plenty of discussion, with folks like Bruce Schneier calling it "catastrophic" and saying that "on the scale of 1 to 10, this is an 11." It's a pretty big deal. So you'd think that everyone would be scrambling to help plug the vulnerability as painlessly as possible. And most companies have been doing that. But one -- StartCom -- apparently sees this as an opportunity to rake in cash and to screw over those most vulnerable.
StartCom is a free SSL cert authority, and on the company's website, it claims it offers this service for free "because we believe in the right to protect and secure information between two entities without discrimination of race, origin and financial capabilities." Except, that's not quite how things are playing out in reality. As is being actively discussed over at HackerNews and via the StartSSL Twitter feed, the company is trying to charge people to revoke the vulnerable certs. Update: And, yes, they're even charging those who are on their premium paid service tiers as well -- and often charging exorbitant rates.
While the company has generally charged for revoking certs, many people pointed out that with a vulnerability of this magnitude, that's both ridiculous and dangerous. However, the company doesn't seem to care.
It's upon the subscriber to take appropriate action since the certificate authority can't enforce which software to use. The terms of service and related fees will not change due to that.
When it was pointed out to the company just how serious this vulnerability is, the company started to get snotty with its own users:
We do understand the situation very well, thanks.... This is not our fault as well. We do not see any reason to provide this paid service for free. We have enough other free services already if you didn't mentioned it.
People began challenging the company on Twitter, and it's taken that same snotty "we don't give a fuck" attitude to them as well:
Yes, this is part of StartCom's business model. Free certs, pay to revoke (Update: but that doesn't explain why they're doing this for paying customers too...). But this is clearly a case where that model should be suspended to keep the internet safe. The amount of ill-will this move is generating is pretty clear. Furthermore, it highlights what a bullshit claim it is that its goal is to better protect communications. If that were true, it would allow emergency revocations for an issue like Heartbleed.
There have been a bunch of stories going around about how 5-year-old Kristoffer Von Hassel figured out a way to hack the Xbox Live password system. Kristoffer's parents noticed that their son was logging into his father's account and playing games he wasn't supposed to be playing. They asked him how he was doing it and he showed them:
Just after Christmas, Kristoffer's parents noticed he was logging into his father's Xbox Live account and playing games he wasn't supposed to be.
“I got nervous. I thought he was going to find out,” said Kristoffer.
In video shot soon after, his father, Robert Davies, is heard asking Kristoffer how he was doing it.
A suddenly excited Kristoffer showed Dad that when he typed in a wrong password for his father’s account, it clicked to a password verification screen. By typing in space keys, then hitting enter, Kristoffer was able to get in through a back door.
Kristoffer's father, Robert Davies, works in computer security (which, frankly, makes me a little skeptical that Kristoffer really made this discovery), and submitted the bug to Microsoft, who not only quickly fixed it, but also listed Kristoffer on their March "acknowledgements" for security researchers who helped them find bugs and vulnerabilities.
Of course, the flip side to this story is how we've seen the CFAA used in the past to go after people discovering similar flaws. Compare the story of Kristoffer to the story of Andrew "weev" Auernheimer. Kristoffer clearly exceeded authorized access to the Xbox Live system in order to obtain something of value (perhaps he gets off because the "something" is not worth more than $5,000, but still...). Of course, weev is an obnoxious internet troll, and Kristoffer is a cute 5-year-old. I guess that's what's meant by "prosecutorial discretion."
Last December, Reuters broke the news that RSA had received $10 million from the NSA to push a weakened crypto standard as the default. This generated an incredible amount of backlash against RSA, with many security researchers pulling out of RSA's conference (which itself was met by a protest conference). Now Reuters reports that it was even worse than first thought:
Security industry pioneer RSA adopted not just one but two encryption tools developed by the U.S. National Security Agency, greatly increasing the spy agency's ability to eavesdrop on some Internet communications, according to a team of academic researchers.
Reuters reported in December that the NSA had paid RSA $10 million to make a now-discredited cryptography system the default in software used by a wide range of Internet and computer security programs. The system, called Dual Elliptic Curve, was a random number generator, but it had a deliberate flaw - or "back door" - that allowed the NSA to crack the encryption.
A group of professors from Johns Hopkins, the University of Wisconsin, the University of Illinois and elsewhere now say they have discovered that a second NSA tool exacerbated the RSA software's vulnerability.
The professors found that the tool, known as the "Extended Random" extension for secure websites, could help crack a version of RSA's Dual Elliptic Curve software tens of thousands of times faster, according to an advance copy of their research shared with Reuters.
As Reuters notes, Extended Random has not been widely adopted (and now won't be), so the real story here is how the NSA undermines companies (and their aims) under the name of "advising on protection."
Rather belatedly, RSA officials are developing a sense of skepticism towards the NSA's motives.
"We could have been more skeptical of NSA's intentions," RSA Chief Technologist Sam Curry told Reuters. "We trusted them because they are charged with security for the U.S. government and U.S. critical infrastructure."
As has been shown numerous times over the last several years, the government would rather make the connected world less secure -- by stockpiling exploits and preventing holes from being patched -- in the name of "security." There's more than one kind of security, and the definition that works for most normal people runs contrary to the NSA's desire to exploit and collect everything it can.
The NSA has refused to comment on the story and RSA, for its part, has not disputed what researchers have uncovered. Dual Elliptic Curve is the NSA's $10 million baby, and the addition of Extended Random does nothing more than make the next set of random numbers easier to predict.
Johns Hopkins Professor Matthew Green said it was hard to take the official explanation for Extended Random at face value, especially since it appeared soon after Dual Elliptic Curve's acceptance as a U.S. standard.
"If using Dual Elliptic Curve is like playing with matches, then adding Extended Random is like dousing yourself with gasoline," Green said…
The academic researchers said it took about an hour to crack a free version of BSafe for Java using about $40,000 worth of computer equipment. It would have been 65,000 times faster in versions using Extended Random, dropping the time needed to seconds, according to Stephen Checkoway of Johns Hopkins.
This is what happens when you allow the NSA to not only play with the toys, but to also design them. "Security," in terms of the RSA's chosen standard, is now nothing more than a buzzword appended to its product line. The company learned far too late that the intelligence agency has little need for solid encryption, viewing it as an obstacle to be surmounted rather than a defensive tool that might make computing more secure -- for everybody.
The agency wants it all and it wants to gather it with the least amount of effort possible. While it may have little desire to turn its weapons on Americans ("incidental collections" will still continue, of course…), it has exactly zero compelling legal reasons not to weaponize crippled encryption against the rest of the world. RSA's credulousness (and perhaps $10 million) apparently silenced its better judgement, and now the connected world is open not only to the NSA's exploits, but anyone else with the desire to open the agency's backdoors.
The Washington Post has an article about how, even though it's long been known that Microsoft is sunsetting support for Windows XP on April 8th of this year, about 10% of US government computers still run XP. Because of this, the article declares that government computers running Windows XP will be vulnerable to hackers after April 8. While technically true (they will be vulnerable after April 8th) what would be a hell of a lot more true is to actually note that they're extremely vulnerable to hackers today and have been just as vulnerable for years. Microsoft sunsetting its support doesn't change that one way or the other.
What's incredible is that, for all the FUD being spread around by government officials about "cyberwar," "cyberattacks" and "cybersecurity," you'd think that getting the government's own house in order would be more of a priority. Outgoing NSA boss General Keith Alexander keeps claiming that he needs more access to private networks to protect them from foreign hackers (yeah, right), and yet this report notes that all sorts of classified government material is sitting on Windows XP computers.
That includes thousands of computers on classified military and diplomatic networks, U.S. officials said. Such networks have stronger defenses generally but hold more sensitive material, raising the stakes for breaches if they occur.
Given how sophisticated the NSA's abilities are to infiltrate just about any computer out there, as revealed by multiple documents leaked by Ed Snowden, you'd think that the NSA would be a bit more proactive in helping to shore up our own defenses by doing things like no longer using Windows XP.