The latest from the Guardian out of the Ed Snowden leaks shows that the NSA and GCHQ have been trying desperately to target Tor, even though Tor is largely funded by the US government. The good news is that they basically haven't been able to attack the underlying Tor network itself; instead, they rely on exploits elsewhere, such as in Firefox, to try to target certain individuals.
Top-secret NSA documents, disclosed by whistleblower Edward Snowden, reveal that the agency's current successes against Tor rely on identifying users and then attacking vulnerable software on their computers. One technique developed by the agency targeted the Firefox web browser used with Tor, giving the agency full control over targets' computers, including access to files, all keystrokes and all online activity.
But the documents suggest that the fundamental security of the Tor service remains intact. One top-secret presentation, titled 'Tor Stinks', states: "We will never be able to de-anonymize all Tor users all the time." It continues: "With manual analysis we can de-anonymize a very small fraction of Tor users," and says the agency has had "no success de-anonymizing a user in response" to a specific request.
Another top-secret presentation calls Tor "the king of high-secure, low-latency internet anonymity".
In response to all of this, the NSA put out one of its typically bland and empty statements about how what it does is "authorized by law" and how it should be no surprise that it's seeking information on bad people.
We wrote last week about an appeals court's technologically illiterate ruling that WiFi isn't a radio communication, and therefore picking up unencrypted WiFi data, even though it's broadcast for anyone to access, could be a violation of wiretapping laws. This seemed ridiculous for a variety of reasons, including the fact that part of the reasoning is that radio is supposedly mostly "auditory" (even though it's not).
If you're a security researcher in the Ninth Circuit (which covers most of the West Coast) who wants to capture unencrypted Wi-Fi packets as part of your research, you better call a lawyer first (and we can help you with that). The Wiretap Act imposes both civil and serious criminal penalties for violations and there is a real risk that researchers who intentionally capture payload data transmitted over unencrypted Wi-Fi—even if they don't read the actual communications —may be found in violation of the law. Given the concerns about over-criminalization and overcharging, prosecutors now have another felony charge in their arsenal.
There's a fairly big risk here that this interpretation of the law is going to create tremendous chilling effects on research.
Of course, there is a flip side. In theory, this might also mean that police can't scoop up WiFi signals either:
On the other hand, the decision also provides a strong argument that the feds and other law enforcement agencies that want to spy on data transmitted over unencrypted Wi-Fi will need to get a wiretap order to do so. We've seen the government use a device called a "moocherhunter" without a search warrant to read Wi-Fi signals to figure out who's connecting to a particular wireless router. This decision suggests that to the extent the government uses a device like this (or even a "stingray" to the extent it can capture Wi-Fi signals) to capture payload data —even if just to determine a person's location—they'll need a wiretap order to do so. That's good news since wiretap orders are harder to get than a search warrant.
Still, we've seen courts give law enforcement much greater leeway in scooping up communications, so this benefit might not actually materialize. The risk and the chilling effects for security researchers, however, are very real. Having seen how often security researchers have been threatened and/or arrested for their research, giving law enforcement yet another bogus charge to use against them is a huge problem.
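To make the header-versus-payload distinction concrete, here's a minimal Python sketch of how a researcher might record only non-content metadata from captured frames and discard payload bytes entirely. This is purely illustrative (and not legal advice): the frame structure below is a simplified stand-in, not a real 802.11 parser, and the field names are hypothetical.

```python
# Illustrative sketch: the legal risk described above turns on capturing
# "payload" (content) data. A researcher measuring, say, access-point density
# could keep only frame metadata and throw away content before it's stored.
# The dict structure and field names here are hypothetical stand-ins.

def strip_payload(frame: dict) -> dict:
    """Keep only non-content metadata from a captured frame."""
    SAFE_FIELDS = {"bssid", "channel", "signal_dbm", "frame_type", "length"}
    return {k: v for k, v in frame.items() if k in SAFE_FIELDS}

# Example: a captured data frame with user content attached.
captured = {
    "bssid": "aa:bb:cc:dd:ee:ff",
    "channel": 6,
    "signal_dbm": -52,
    "frame_type": "data",
    "length": 1432,
    "payload": b"GET /inbox HTTP/1.1 ...",  # the legally risky part
}

sanitized = strip_payload(captured)
assert "payload" not in sanitized  # content is gone; metadata survives
print(sanitized)
```

Whether header-only capture actually keeps a researcher clear of the Wiretap Act under this ruling is exactly the kind of question you'd want that lawyer for.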
It appears that Apple is the latest company to take a "kill the messenger" approach to security vulnerabilities. Hours after security researcher Charlie Miller found a huge vulnerability in iOS, which would allow malicious software to be installed on iOS devices, Apple responded by revoking his developer license.
The obvious implication: don't search for security vulnerabilities in Apple products, and if you do find them, keep them to yourself.
First off, here's Miller explaining the security hole:
To be fair, Miller did get Apple to approve an app that he was using to demo the security flaw. However, kicking him out of its developer program is exactly the wrong response. Miller, clearly, was not looking to use the code maliciously -- just demoing a problem with their system. In other words, he was helping Apple become more secure, and they punished him for it. The message seems to be that Apple doesn't want you to help make their system more secure. Instead, they'd rather let the malicious hackers run wild. As Miller noted to Andy Greenberg at Forbes (the link above):
“I’m mad,” he says. “I report bugs to them all the time. Being part of the developer program helps me do that. They’re hurting themselves, and making my life harder.”
And, no, this is not a case where he went public first either. He told Apple about this particular bug back on October 14th. Either way, this seems like a really brain-dead move by Apple. It's only going to make Apple's systems less secure when it punishes the folks who tell it about security vulnerabilities.
One of the general tenets of white hat security hackers is that when they find a vulnerability, they alert the company first and allow it to fix things before they reveal the details. But what if it's impossible to reach anyone at the company? That Anonymous Coward points us to a recent case of someone discovering a serious zero-day vulnerability at American Express... and not only being unable to find anyone to contact, but also being told that the company would pay more attention to him if he were a cardholder:
To my great surprise American Express doesn’t allow anybody to contact them. Instead, you’re sent through their ten-year-old copyright noticed website’s first line support jungle to be attacked with questions ensuring that you’re a paying customer. If you’re not then you might as well not bother, unless you feel like speaking technical advanced 0day vulnerabilities with incompetent support personnel either through Twitter direct messages or phone. They will leave you no option of contacting them in a manner that circumvents any theoretical possibility they may have of boosting sales numbers.
The only acceptable contact methods that I found on their site were telephone, fax or physical mail to some typoed country called Swerige. I figured none of them were suitable for 0day reports and decided to turn to Twitter and ask for an e-mail address or some other modern protocol.
As TAC mentioned in his submission, perhaps black hat hackers are merely white hats who got tired of the muzak on hold...
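For what it's worth, there is now a standard aimed at exactly this problem: RFC 9116 defines a machine-readable "security.txt" file, served at /.well-known/security.txt, listing a company's vulnerability-report contacts. Here's a minimal Python sketch of pulling contacts out of such a file -- a simplified illustration, not a full RFC 9116 parser, and the example file contents are hypothetical:

```python
# Simplified sketch of checking a vendor's security.txt (RFC 9116) for a
# vulnerability-disclosure contact. Only "Contact:" lines are handled here;
# a real parser would also validate Expires, signatures, etc.

def parse_security_txt(text: str) -> dict:
    """Collect Contact: lines from a security.txt body."""
    contacts = []
    for line in text.splitlines():
        line = line.strip()
        if line.lower().startswith("contact:"):
            contacts.append(line.split(":", 1)[1].strip())
    return {"contacts": contacts}

# Hypothetical example file a researcher might fetch.
sample = """# security.txt
Contact: mailto:security@example.com
Contact: https://example.com/report
Expires: 2030-01-01T00:00:00Z
"""

info = parse_security_txt(sample)
print(info["contacts"])
```

Had American Express published something like this, the researcher would have had a direct reporting channel instead of a first-line support jungle.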
Modplan alerts us to a developer at Wolfire games who wrote a blog post claiming that DRM can be "effective," and giving the example of StarForce's DRM on Splinter Cell 3: Chaos Theory, which supposedly took over a year to crack. But, for this to happen, there were all sorts of problems and even lawsuit threats over people reporting on those problems:
StarForce 3.0 used a plethora of controversial methods to achieve this, most notably, it secretly installed mandatory device drivers. This obviously was highly controversial and there were many reports of new security vulnerabilities, performance degradation, incompatibilities, system instability, and other issues. As an aside, StarForce actually threatened to sue BoingBoing and CNET for reporting on these issues.
Massive consumer issues aside, it worked.
Wait, what? You can't just toss aside those massive consumer issues. "Security vulnerabilities, performance degradation, incompatibilities, system instability, and other issues" does not sound like it "worked" at all. It sounds like the exact opposite. It pissed off and potentially put at risk tons of paying customers. That's not DRM "working" -- though, that is how DRM works. Anyone who reads about "security vulnerabilities, performance degradation, incompatibilities, system instability, and other issues" and thinks that's an example of a system to be emulated is not someone you should ever trust to do business with. I'd consider that fair warning to stay away from Wolfire Games.

As pointed out in the comments, we may have been too quick to judge on this one. Wolfire makes it clear it doesn't believe DRM makes sense. The folks from Wolfire also reached out and pointed out that this post was actually a small "correction" to an anti-DRM piece written earlier. As for DRM, Wolfire states plainly: "We have never used DRM, we hate DRM, and we never will use DRM!" On top of that, they "encourage all other game developers to remove DRM." My apologies for jumping to conclusions on that one. Ok, now go support Wolfire Games...
As pretty much anyone in computer security recognizes, any bit of "secure" computing is only secure for a limited period of time. Eventually, the security will be cracked. Yet we still keep hearing about expectations for some new technology to solve all our security problems. For example, we've been hearing for years about the wonders of "trusted computing," which basically gets mocked every time some company tries to roll it out (which is why it's gone through five or six name changes over the years). The latest news is that Intel's implementation of a trusted computing offering, called Trusted Execution Technology, has security vulnerabilities that allow it to be circumvented. In other words, it's neither trustworthy nor secure. Of course, it's not widely used either, so it's not a big deal. But, once again, there is no silver bullet that solves all security problems.
You may recall earlier this month that a judge in New Jersey barred some researchers from releasing their report into the security vulnerabilities found in e-voting machines from Sequoia that were being used in the state. Sequoia had fought hard to stop the research from even being done in the first place, let alone released, even threatening the researchers with lawsuits. Now, one of the researchers, Andrew Appel, has released a long report detailing a ridiculous number of security problems with Sequoia's machines. To be honest, it's not clear from the blog post whether this is the same report that's being suppressed, but it's pretty damning. Because this is an important issue that doesn't necessarily get enough attention, I'm reposting Appel's executive summary of just how screwed up these machines are:
I. The AVC Advantage 9.00 is easily "hacked" by the installation of fraudulent firmware. This is done by prying just one ROM chip from its socket and pushing a new one in, or by replacement of the Z80 processor chip. We have demonstrated that this "hack" takes just 7 minutes to perform.
The fraudulent firmware can steal votes during an election, just as its criminal designer programs it to do. The fraud cannot practically be detected. There is no paper audit trail on this machine; all electronic records of the votes are under control of the firmware, which can manipulate them all simultaneously.
II. Without even touching a single AVC Advantage, an attacker can install fraudulent firmware into many AVC Advantage machines by viral propagation through audio-ballot cartridges. The virus can steal the votes of blind voters, can cause AVC Advantages in targeted precincts to fail to operate; or can cause WinEDS software to tally votes inaccurately. (WinEDS is the program, sold by Sequoia, that each County's Board of Elections uses to add up votes from all the different precincts.)
III. Design flaws in the user interface of the AVC Advantage disenfranchise voters, or violate voter privacy, by causing votes not to be counted, and by allowing pollworkers to commit fraud.
IV. AVC Advantage Results Cartridges can be easily manipulated to change votes, after the polls are closed but before results from different precincts are cumulated together.
V. Sequoia's sloppy software practices can lead to error and insecurity. Wyle's Independent Testing Authority (ITA) reports are not rigorous, and are inadequate to detect security vulnerabilities. Programming errors that slip through these processes can miscount votes and permit fraud.
VI. Anomalies noticed by County Clerks in the New Jersey 2008 Presidential Primary were caused by two different programming errors on the part of Sequoia, and had the effect of disenfranchising voters.
VII. The AVC Advantage has been produced in many versions. The fact that one version may have been examined for certification does not give grounds for confidence in the security and accuracy of a different version. New Jersey should not use any version of the AVC Advantage that it has not actually examined with the assistance of skilled computer-security experts.
VIII. The AVC Advantage is too insecure to use in New Jersey. New Jersey should immediately implement the 2005 law passed by the Legislature, requiring an individual voter-verified record of each vote cast, by adopting precinct-count optical-scan voting equipment.
from the it's-not-like-we've-got-computers-that-can-count dept
You know, the one thing that computers are supposed to be good at is counting things accurately. So why is it so hard to do so when it comes to counting votes? We recently wrote about the case in Washington DC's primaries where election officials were struggling to figure out the source of an awful lot of votes for a non-existent write-in candidate. Sequoia, the makers of the e-voting machines in question, were quick to deny any and all responsibility with the hilariously "thou dost protest too much" statement: "There's absolutely no problem with the machines in the polling places. No. No."
Either way, it appears that officials in DC still can't properly add up the votes, and are noting that 13 separate races all show the exact same number of overvotes: 1,542, though no one can explain why. Sequoia continues to stand by its original statement that the problem must be one of human error -- though it fails to explain how simple human error would create 1,542 extra votes in 13 entirely separate races -- or why it didn't design a system that would prevent "human error" from creating such votes.
Last week, we wrote about yet another problem with Sequoia e-voting equipment where the company was vehemently denying the problem was with the machines, even saying: "There's absolutely no problem with the machines in the polling places. No. No." Of course, this came right after a report revealing how easy it was to hack their machines, as well as numerous other problems with Sequoia machines. Yet the company consistently employs the same exact strategy: it couldn't possibly be the fault of the machines.
You may recall the story earlier this month about the Sequoia optical scanning machines in Palm Beach County that supposedly couldn't reach the same vote tally if different counting machines were used. At least that was the original claim -- but it was later changed when election officials admitted they had simply misplaced some ballots. Well, the latest report claims that the recount is now not showing lost ballots -- it's showing too many ballots. Fantastic. Election officials think they've traced the problem to the fact that some votes on Sequoia's e-voting machine cartridges weren't properly transferred, which kicks off Sequoia's standard PR response:
The company's representative, Phil Foster, says "the cartridge is fine. Why it didn't read I do not know," suggesting another human error made on election night.
You know, when you keep saying that, and the problems keep occurring, at some point, people are going to stop believing you. Even if the problem really is human error every one of these times, people might begin to wonder why you don't design your systems to avoid such human errors.
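For perspective, the kind of consistency check that would catch an anomaly like this is trivial to write: in any race, the number of votes recorded can never exceed the number of ballots cast. Here's a minimal Python sketch, using hypothetical race names and numbers chosen only to mirror the reported 1,542-vote anomaly:

```python
# Hypothetical tabulation sanity check: flag any race where recorded votes
# exceed ballots cast. The races and numbers below are invented for
# illustration; they are not actual election data.

def find_overvote_anomalies(races: list) -> list:
    """Flag races where recorded votes exceed ballots cast."""
    flagged = []
    for race in races:
        excess = race["votes_recorded"] - race["ballots_cast"]
        if excess > 0:
            flagged.append(f"{race['name']}: {excess} excess votes")
    return flagged

results = [
    {"name": "Council Ward 1", "ballots_cast": 4210, "votes_recorded": 5752},
    {"name": "School Board",   "ballots_cast": 3980, "votes_recorded": 3978},
]

for warning in find_overvote_anomalies(results):
    print(warning)  # flags the race with 1542 impossible extra votes
```

That a deployed tabulation system apparently lets totals like these through without tripping any alarm is precisely the design failure being complained about here.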
It seems like every few months, well-respected security researchers come out with yet another report about just how insecure various e-voting machines are. The amazing thing is how hard the various e-voting companies have fought against allowing these researchers to look at their machines, always insisting that the federal certification process (the one that was later shown to have done a poor job of testing the machines) was fine. Of course, even the Government Accountability Office has admitted that the federal certification process sucks.
One of the complaints that the e-voting firms have had about having independent security researchers testing the machines is that those tests are not in real world conditions. In fact, we had a commenter from one of the e-voting companies who insisted that these independent tests were useless because:
The point people often miss, which is left off of the conspiracy blogs, is that all of these 'hacking' attempts that are requested are made to do so in some sort of vacuum. In some obscure room where a gang of hackers get together and try to penetrate the system with unlimited resources. In any election, paper or fully electronic, there are procedural and security measures taken that complement and supplement the security features of the system itself. This is in addition to internal and system-independent, pre- and post-election audit features.
That's really rather meaningless, because if it were true, then that info would also come out in those independent research reports. However, even that comment turns out to be untrue. As a few folks have submitted, some security researchers at UCSB have demonstrated not just how insecure Sequoia's e-voting systems are, but they've shown how easy it is to hack an election with a pair of videos that you can watch right here (if you're in the RSS feed, click through to see them):
What this shows is that the researchers' hack demolishes that comment from the insider. All it required was for those wishing to change the results of the election to drop a USB key into the pile of USB keys used to set up the system. All of the security measures the insider talks about are then bypassed with ease. The video shows it getting by the procedural security measures, as well as the pre- and post-election audit features.
The videos also show why paper ballots are hardly a solution: the second video demonstrates how the malware included in the software can be set to void legitimate votes and replace them with fake ones, in a variety of different scenarios, almost all of which are likely to go undetected. This is a hugely damning report -- and it comes against a company that has fought so hard against having its machines tested by independent security experts. While some may say this shows exactly why the company didn't want its machines tested, it should concern anyone who believes in free and fair democratic elections that we're using such insecure voting machines.