from the it's-not-like-we've-got-computers-that-can-count dept
You know, the one thing computers are supposed to be good at is counting things accurately. So why is it so hard when it comes to counting votes? We recently wrote about the case in Washington DC's primaries, where election officials were struggling to figure out the source of an awful lot of votes for a non-existent write-in candidate. Sequoia, the maker of the e-voting machines in question, was quick to deny any and all responsibility with the hilariously "thou dost protest too much" statement: "There's absolutely no problem with the machines in the polling places. No. No."
Either way, it appears that officials in DC still can't properly add up the votes, and are noting that 13 separate races all show the exact same number of overvotes: 1,542, though no one can explain why. Sequoia continues to stand by its original statement that the problem must be one of human error -- though it fails to explain how simple human error would create 1,542 extra votes in 13 entirely separate races, or why it didn't design a system that would prevent "human error" from creating such votes.
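To be clear, Sequoia's firmware is not public, so no one outside the company can say what actually happened. But as a purely hypothetical sketch, here is one class of software bug that would produce an *identical* spurious overvote count in many unrelated races: counting overvoted ballots once, machine-wide, and then attaching that single total to every race on the report. All function and race names below are invented for illustration.

```python
def tally_overvotes_buggy(ballots, races):
    """Each ballot maps race -> list of marked candidates.
    Bug: overvoted ballots are counted once, machine-wide, and the
    same total is then reported for every race on the results sheet."""
    machine_wide = sum(
        1 for b in ballots if any(len(marks) > 1 for marks in b.values())
    )
    return {race: machine_wide for race in races}


def tally_overvotes_fixed(ballots, races):
    """Correct version: overvotes are counted separately per race."""
    return {
        race: sum(1 for b in ballots if len(b.get(race, [])) > 1)
        for race in races
    }


ballots = [
    {"mayor": ["A", "B"], "council": ["C"]},       # overvote in mayor only
    {"mayor": ["A", "B"], "council": ["C", "D"]},  # overvotes in both races
    {"mayor": ["A"], "council": ["C"]},            # clean ballot
]
races = ["mayor", "council"]

print(tally_overvotes_buggy(ballots, races))  # same number in every race
print(tally_overvotes_fixed(ballots, races))  # correct per-race counts
```

The buggy version reports the same figure (here, 2) for every race on the machine, exactly the sort of pattern DC officials are describing; the fixed version shows the counts diverging as they should. Again, this is only an illustration of why "13 races, same number" smells like software rather than 13 independent human errors.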
Last week, we wrote about yet another problem with Sequoia e-voting equipment, where the company vehemently denied the problem was with the machines, even saying: "There's absolutely no problem with the machines in the polling places. No. No." Of course, this came right after a report revealing how easy it was to hack their machines, as well as numerous other problems with Sequoia machines. Yet the company consistently employs the exact same strategy: it couldn't possibly be the fault of the machines.
You may recall the story earlier this month about the Sequoia optical scanning machines in Palm Beach County that supposedly produced different vote tallies when different counting machines were used. At least that was the original claim -- it was later changed when election officials admitted they had simply misplaced some ballots. Well, the latest report claims that the recount is now not showing lost ballots -- it's showing too many ballots. Fantastic. Election officials think they've traced the problem to the fact that some votes on Sequoia's e-voting machine cartridges weren't properly transferred, which kicks off Sequoia's standard PR response:
The company's representative, Phil Foster, says "the cartridge is fine. Why it didn't read I do not know," suggesting yet another human error made on election night.
You know, when you keep saying that, and the problems keep occurring, at some point, people are going to stop believing you. Even if the problem really is human error every one of these times, people might begin to wonder why you don't design your systems to avoid such human errors.
It seems like every few months, well-respected security researchers come out with yet another report about just how insecure various e-voting machines are. The amazing thing is how hard the various e-voting companies have fought against allowing these researchers to look at their machines, always insisting that the federal certification process (the one that was later shown to have done a poor job of testing the machines) was fine. Of course, even the Government Accountability Office has admitted that the federal certification process sucks.
One of the complaints that the e-voting firms have had about having independent security researchers testing the machines is that those tests are not in real world conditions. In fact, we had a commenter from one of the e-voting companies who insisted that these independent tests were useless because:
The point people often miss, which is left off of the conspiracy blogs, is that all of these 'hacking' attempts that are requested are made to do so in some sort of vacuum. In some obscure room where a gang of hackers get together and try to penetrate the system with unlimited resources. In any election, paper or fully electronic, there are procedural and security measures taken that complement and supplement the security features of the system itself. This is in addition to internal and system-independent, pre- and post-election audit features.
That's really rather meaningless, because if it were true, then that info would also come out in those independent research reports. However, even that comment turns out to be untrue. As a few folks have submitted, some security researchers at UCSB have demonstrated not just how insecure Sequoia's e-voting systems are, but they've shown how easy it is to hack an election with a pair of videos that you can watch right here (if you're in the RSS feed, click through to see them):
What this shows is that the hack the researchers demonstrate demolishes that comment from the insider. All it required was for those wishing to change the results of the election to drop a USB key into the pile of USB keys used to set the system up. All of the security measures that the insider talks about are then bypassed with ease. The video shows the attack getting by the procedural security measures, as well as the pre- and post-election audit features.
The videos also show why paper ballots are hardly a solution: the second one demonstrates how the malware included in the software can be set to void out legitimate votes and replace them with fake votes, in a variety of different scenarios, almost all of which are likely to go undetected. This is a hugely damning report -- and it comes against a company that has fought so hard against having its machines tested by independent security experts. While some may say that this shows why the company didn't want its machines tested, it should concern anyone who believes in free and fair democratic elections that we're using such insecure voting machines.
from the security-through-obscurity...-and-legal-threats dept
It's amazing to watch just how sensitive some companies are concerning the rather well-known security vulnerabilities associated with RFID tags and smart cards. We've seen, time and time again, companies try to suppress such research from getting published -- and every single time, those efforts to suppress publication backfire, often badly.
But that never seems to stop companies from flexing their legal muscles.
Texas Instruments comes on along with chief legal counsel for American Express, Visa, Discover, and everybody else... They were way, way outgunned and they absolutely made it really clear to Discovery that they were not going to air this episode talking about how hackable this stuff was, and Discovery backed way down being a large corporation that depends upon the revenue of the advertisers. Now it's on Discovery's radar and they won't let us go near it.
Check out the video of him saying this (while admitting he's probably not supposed to talk about it) here:
Perhaps it's an exaggeration by Savage, but do the credit card companies really think that security through obscurity (with a healthy dose of legal threats) is the best way to protect their customers?
Consider me to be in a state of shock. For nearly half a decade, Diebold has responded in the identical way to every single report of a problem or security vulnerability with its e-voting machines: attacking those who pointed out the problem and claiming it really wasn't a problem at all. This has happened so many times that I'm not even sure how to react now that the company (renamed Premier to get away from the Diebold name stigma) has finally admitted that its machines have a flaw that drops votes. Oops. It's warning 34 states that use the machines of the problem, which was highlighted in the lawsuit Ohio filed against Premier/Diebold. Not only that, but it's admitting the flaw has been in the software for the past decade.
It should also make us question Premier/Diebold's longstanding claim that independent outsiders should not be allowed to inspect its machines for problems. Of course, Diebold execs are already downplaying all of this, claiming that they were "confident" that this hadn't actually impacted any elections, though they offer no proof of that. The company's president admits he's "distressed" that they were wrong in their previous analysis, but he fails to explain why the company is so against letting outsiders inspect the machines to catch such flaws. In the meantime, the company insists that the problem will be patched in time for the November election, and I'm sure we're all confident that there won't be any other problems with their machines, right?
While it took about a week and a half, a judge has now lifted the gag order that had prevented some MIT students from sharing a presentation about vulnerabilities in the Boston subway system. The judge refused to ban the students from talking about it for a period of five months (which the MBTA insisted it needed to fix the system). This is definitely a win for free speech, though I'm sure the debate over how and when to disclose security vulnerabilities will continue for a long, long time.
We recently wrote about how NXP Semiconductor (formerly Philips Semiconductor) was suing to try to stop the publication of some research that showed some vulnerabilities in its chips used in smart cards around the world. The vulnerability itself was already widely known (though NXP denied it for a while). The good news is that a judge has denied the request, and the research will be published as originally planned. The bad news is that NXP wasted quite a lot of time denying there was a problem instead of fixing the problem -- and with this latest misguided legal stunt, made sure a lot more people knew about it.
Rich Kulawiec writes in to point out that security expert Dan Geer is suggesting that merchants violate the security of customers they deem as security risks. His argument is, basically, that there are two types of users out there: those who respond "yes" to any request -- and therefore are likely to be infected by multiple types of malware doing all sorts of bad things -- and those who respond "no" to any request, who are more likely to be safe. Thus, Geer says merchants should ask users if they want to connect over an "extra special secure connection," and if they respond "yes," you assume that they respond yes to everything and therefore are probably unsafe. To deal with those people, Geer says, you should effectively hack their computer. It won't be hard, since they're clearly ignorant and open to vulnerabilities -- so you just install a rootkit and "0wn" their machine for the duration of the transaction.
As Kulawiec notes in submitting this: "Maybe he's just kidding, and the sarcasm went right over my (caffeine-starved) brain. I certainly hope so, because otherwise there are so many things wrong with this that I'm struggling to decide which to list first." Indeed. I'm not sure he's kidding either, but the unintended consequences of violating the security of someone's computer, just because you assume it has been violated previously, are likely to make things a lot worse. This seems like a suggestion that could have the same sort of negative unintended consequences as the suggestion others have made about creating "good trojans" that go around automatically closing security holes and stopping malware by using the same techniques the malware employs. Both are based on the idea that people are too stupid to cure themselves, and that somehow "white hat" hackers can help fix things. Now, obviously, plenty of people do get infected -- but using that as an excuse to infect them back, even for noble purposes, is only going to create more problems in the long run. Other vulnerabilities will be created, and you're trusting these "good" hackers to do no harm on top of what's been done already, which is unlikely to always be the case. No, security will never be perfect, and some people will always be more vulnerable -- but that shouldn't give you the right to violate their security, even for a good reason.
This has not been a good week for e-voting companies. First came the report out of California that independent security experts found problems on every machine they tested, followed quickly by security experts finding problems with other machines in Florida. This should come as no surprise. Every time a security expert gets a chance to check out these machines, they find problems. What was odd about Monday's announcement out of California, though, was that the state had only released some of the reports. It left out the source code review. However, late Thursday, the source code reports were finally released, and things don't look much better. Apparently all of the e-voting machines are vulnerable to malicious attacks that could "affect election outcomes." The report also points out: "An attack could plausibly be accomplished by a single skilled individual with temporary access to a single voting machine. The damage could be extensive -- malicious code could spread to every voting machine in polling places and to county election servers." This, of course, is what others have been saying for years, and what Diebold always brushes off. Ed Felten has gone through the reports and is amazed to find that all of the e-voting machines seem to have very similar security problems -- and that many problems Diebold had insisted it fixed in 2003 were still present. Remember how Diebold had used the master password "1111" in its machines? Now its machines use hard-coded passwords like "diebold" and (I kid you not) "12345678." At some point, isn't it time for Diebold (and the other e-voting machine makers) to stand up and admit that their machines aren't secure and, in fact, never were? At the very least, the company owes the world a huge apology -- but somehow, given its past behavior whenever its machines are shown to be insecure, that seems unlikely to happen.
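Why are hard-coded passwords so damning? Because every machine ships with the same secret, and anyone who reads the firmware (or one of these public reports) learns it for all machines at once, with no way to change it in the field. As a rough sketch of the difference -- this is generic illustrative code, not anything from the actual reports or firmware -- compare a hard-coded check with the standard alternative of storing only a per-device salted hash set at provisioning time:

```python
import hashlib
import hmac
import secrets

# Anti-pattern (the kind of thing the reports describe): one secret,
# identical on every machine, recoverable by anyone with the firmware image.
HARDCODED_PASSWORD = "12345678"

def check_hardcoded(supplied: str) -> bool:
    return hmac.compare_digest(supplied, HARDCODED_PASSWORD)


# Standard alternative: each device stores a random salt and a slow hash,
# set per-device at provisioning, so no shared secret ships in the firmware.
def make_credential(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_credential(supplied: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", supplied.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

With the second approach, compromising one machine's stored hash doesn't hand an attacker a password that works everywhere, and a leaked credential can actually be rotated -- neither of which is true of "12345678" baked into every unit.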