In what may be one of the more ridiculous reactions to the latest (failed) attempt at putting bombs on airplanes, some security consultants are suggesting that law enforcement may use this as a reason to no longer allow WiFi or mobile phone connectivity on airplanes. The idea is that by adding connectivity, you provide remote access to a bomb, allowing someone to set it off:
In-flight Wi-Fi "gives a bomber lots of options for contacting a device on an aircraft", Alford says. Even if ordinary cellphone connections are blocked, it would allow a voice-over-internet connection to reach a handset.
"If it were to be possible to transmit directly from the ground to a plane over the sea, that would be scary," says Alford's colleague, company founder Sidney Alford. "Or if a passenger could use a cellphone to transmit to the hold of the aeroplane he is in, he could become a very effective suicide bomber."
But... if you actually think about it for more than a few seconds, this makes almost no sense. First of all, that final sentence makes no sense at all. A suicide bomber on an airplane can already do this. They don't even need a cellular network: any of a number of short-range wireless options would let them set up a connection between themselves and a bomb stowed away somewhere. Furthermore, they could already use cellular networks (if they're flying over land where such networks exist) -- just not legally. But somehow I doubt a terrorist intent on blowing up an airplane cares about following FCC rules on using mobile phones on airplanes.

As for the terrorist on the ground using WiFi to remotely connect to a bomb... again, that's an unlikely scenario. While it's possible that someone could configure a bomb to automatically log itself on to an in-flight WiFi system, it would still need to get through the sign-on and payment process. Possible? Perhaps. Likely? Not really. There are much more plausible options -- again, such as just using existing cellular networks. Hopefully this is just idle speculation by these "consultants," rather than anything any law enforcement agency is taking seriously. But, then again, these are the same law enforcement agencies that make me remove my shoes every time I want to fly.
Late last week, of course, Google 'fessed up to the fact that it was accidentally collecting some data being transmitted over open WiFi connections with its Google Street View mapping cars. As we noted at the time, it was bad that Google was doing this and worse that it didn't realize it. However, it wasn't nearly as bad as some have made it out to be. First of all, anyone on those networks could have done the exact same thing. As a user on a network, it's your responsibility to secure your connection. Second, at most, Google was getting a tiny fraction of any data, since it only captured a quick snippet as it drove by. Third, it seemed clear that Google had not done anything with the collected data. So, yes, it was not a good thing that this was done, but the actual harm was somewhat minimal -- and, again, anyone else could have easily done the same thing (or much worse).
That said, given the irrational fear over Google collecting any sort of information in some governments, this particular bit of news has quickly snowballed into investigations across Europe and calls for the FTC to get involved in the US. While one hopes that any investigation will quickly realize that this is not as big a deal as it's being made out to be, my guess is that, at least in Europe, regulators will come down hard on Google.
However, going to an even more ridiculous level, the class action lawyers are jumping into the game. Eric Goldman points us to a hastily assembled class action lawsuit filed against Google over this issue. Basically, it looks like the lawyers found two people who kept open WiFi networks, and they're now suing Google, claiming that its Street View operations "harmed" them. For the life of me, I can't see how that argument makes any sense at all. Here's the filing:
Basically, you have two people who could have easily secured their WiFi connection or, barring that, secured their own traffic over their open WiFi network, and chose to do neither. Then, you have a vague claim, with no evidence, that Google somehow got their traffic when its Street View cars photographed the streets where they live. As for what kind of harm it did? Well, there's nothing there either.
My favorite part, frankly, is that one of the two people involved in bringing the lawsuit, Vicki Van Valin, effectively admits that she failed to secure confidential information as per her own employment requirements. Yes, this is in her own lawsuit filing:
Van Valin works in the high technology field, and works from her home over her internet-connect computer a substantial amount of time. In connection with her work and home life, Van Valin transmits and receives a substantial amount of data from and to her computer over her wireless connection ("wireless data"). A significant amount of the wireless data is also subject to her employer's non-disclosure and security regulations.
Ok. So your company has non-disclosure and security regulations... and you access that data unencrypted over an unencrypted WiFi connection... and then want to blame someone else for it? How's that work now? Basically, this woman appears to be admitting, in a lawsuit filed on her own behalf, that she has violated her own company's rules. Wow.
While there's nothing illegal about setting up an open WiFi network -- and, in fact, it's often a very sensible thing to do -- if you're using an open WiFi network, it is your responsibility to recognize that it is open, and that any unencrypted data you send over that network can be seen by anyone else on the same access point.
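To make that point concrete, here's a minimal sketch of why unencrypted traffic on an open access point is readable by any passive listener. The "captured" payload below is a made-up plaintext HTTP request, not real traffic, and the parsing function is purely illustrative:

```python
# Toy illustration: anything sent unencrypted over an open WiFi network
# arrives at every nearby radio as readable bytes. This payload is a
# made-up plaintext HTTP login request, not captured data.
captured_payload = (
    b"POST /login HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"\r\n"
    b"username=alice&password=hunter2"
)

def extract_form_fields(payload: bytes) -> dict:
    """Split an HTTP request's form-encoded body into its fields."""
    # Headers and body are separated by a blank line in HTTP/1.1.
    headers, _, body = payload.partition(b"\r\n\r\n")
    fields = {}
    for pair in body.split(b"&"):
        key, _, value = pair.partition(b"=")
        fields[key.decode()] = value.decode()
    return fields

# A listener recovers the credentials with no "hacking" at all.
print(extract_form_fields(captured_payload))
```

The same request sent over HTTPS (or any encrypted tunnel) would be opaque to that listener, which is the whole point: the burden of protecting the traffic sits with the sender, not the access point.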
This is clearly nothing more than a money grab by some people, and hopefully the courts toss it out quickly, though I imagine there will be more lawsuits like this one.
Germany's top criminal court ruled Wednesday that Internet users need to secure their private wireless connections by password to prevent unauthorized people from using their Web access to illegally download data.
Internet users can be fined up to €100 ($126) if a third party takes advantage of their unprotected WLAN connection to illegally download music or other files, the Karlsruhe-based court said in its verdict.
"Private users are obligated to check whether their wireless connection is adequately secured to the danger of unauthorized third parties abusing it to commit copyright violation," the court said.
This is backwards in so many ways. First, open WiFi is quite useful, and requiring a password can be a huge pain, limiting all sorts of individuals and organizations who have perfectly good reasons for offering free and open WiFi. Second, fining the WiFi hotspot owner for actions of users of the service is highly troubling from a third party liability standpoint. The operator of the WiFi hotspot should not be responsible for the actions of users, and it's troubling that the German court would find otherwise. This is an unfortunate ruling no matter how you look at it.
from the yeah,-because-the-eavesdroppers-care dept
The big news in security circles this week is the fact that a security researcher claims to have cracked the encryption used to keep GSM mobile phone calls private. It looks like he and some collaborators used a brute force method. He admits that it requires about $30,000 worth of equipment to decrypt calls in real-time, but that's pocket change for many of the folks who would want to make use of this. What's much more interesting (and worrisome) is the GSM Association's (GSMA) response to this news:
"This is theoretically possible but practically unlikely," said Claire Cranton, an association spokeswoman. She said no one else had broken the code since its adoption. "What he is doing would be illegal in Britain and the United States. To do this while supposedly being concerned about privacy is beyond me."
There are so many things wrong with that statement it's hard to know where to begin. First, claiming it's "theoretically possible, but practically unlikely" means that it's very, very possible and quite likely. To then say that no one else has broken the code since its adoption fifteen years ago is almost certainly false. What she means is that no one else who's broken the code has gone public with it -- probably because it's much more lucrative to keep that information to themselves. Next, blaming the messenger by announcing that cracking the code is "illegal in Britain and the United States" is not what anyone who uses a GSM phone should want to hear. They should want to know how the GSMA is fixing the problem -- not how it's responding to the public release. Finally, if it's "beyond" her why someone concerned about privacy would crack a code used for private conversations and show that it's insecure, she should be looking for a different job. This has everything to do with privacy: the GSMA claims the code keeps conversations private, and these researchers are showing that it does not.
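To see why "theoretically possible" so often means "practically doable," here's a toy illustration of brute-force key recovery. This is emphatically not the actual GSM attack (A5/1 is a 64-bit stream cipher, and the real attack relies on precomputed tables); the "cipher" here is a repeating-XOR with a deliberately tiny 2-byte key, so the whole keyspace can be searched in a fraction of a second:

```python
import itertools

# Toy known-plaintext brute force (NOT the real A5/1 attack). The key
# is only 2 bytes, so the keyspace has just 65,536 entries -- the same
# exhaustive-search idea, scaled down to run instantly.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; encryption and decryption are the same op."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = bytes([0x5A, 0xC3])              # unknown to the attacker
ciphertext = xor_cipher(b"HELLO WORLD", secret_key)

# The attacker knows (or guesses) that messages start with "HE".
crib = b"HE"
for candidate in itertools.product(range(256), repeat=2):
    key = bytes(candidate)
    if xor_cipher(ciphertext, key).startswith(crib):
        recovered = key
        break

print(recovered == secret_key)  # True
```

Scaling the same idea up to a real 64-bit keyspace is exactly what turns "$30,000 worth of equipment" into a practical attack rather than a theoretical one.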
Last year, it became clear that REAL ID was dead on arrival as pretty much everyone was against it, and states were refusing to implement it. With the changing of the administration, it seemed like REAL ID was finally going to die completely... but apparently not just yet. EFF alerts folks to the fact that the same concept has basically been reintroduced under the name PASS ID, as if that would trick people:
The plan sounds equally bad and unnecessary:
Proponents seem to be blind to the systemic impotence of such an identification card scheme. Individuals originally motivated to obtain and use fake IDs will instead use fake identity documents to procure "real" drivers' licenses. PASS ID creates new risks -- it calls for the scanning and storage of copies of applicants' identity documents (birth certificates, visas, etc.). These documents will be stored in databases that will become leaky honeypots of sensitive personal data, prime targets for malicious identity thieves or otherwise accessible by individuals authorized to obtain documents from the database. Despite some alterations to the scheme, PASS ID is still bad for privacy in many of the same ways the REAL ID was.
But why let that stop the gov't from coming up with more ways to keep tabs on you?
If you have an old Nokia 1100 phone, maybe it's time to dust it off and try selling it in Germany, where hackers claim to have figured out a way to use certain Nokia phones to steal authentication codes for bank transactions. There are a few reports that these old phones (if they were made in a very specific factory, not just any old model...) are selling for ridiculous amounts -- ranging from $700 to $30,000 -- presumably because the handsets are so hard to find and are valuable to hackers prone to crime. So far, Nokia says it can't imagine any way for these old phones to be hacked for banking fraud. But not surprisingly, security vendors are quick to point out the plausibility of this type of phone hacking -- since security firms can obviously benefit from unfounded fears that encourage consumers to buy security software regardless of the actual need for it. Is it really that hard to ask a security vendor what the likelihood would be for a criminal to actually succeed in such a scam? Hopefully, the odds of actually stealing any money with these ancient phones are approaching zero -- especially now that the tools to implement the fraud are known and apparently getting quite expensive. Perhaps the real suckers in this story are the gullible hackers who are buying old phones in shady forums for prices that are far more than the phones are worth?
There was plenty of news over the weekend about a security flaw found in Google's Android mobile operating system that could allow certain websites to run attack code and access sensitive data. The security researchers have said they won't reveal the details of the flaw, even though it's apparently a known flaw in some of the open source code in Android that Google did not update. However, that didn't stop Google from attacking the messenger, claiming that the security researcher who discovered the flaw broke some "unwritten rules" concerning disclosure. First of all, there is no widespread agreement on any such "unwritten rules," and many security researchers believe that revealing such flaws is an effective means of getting companies to patch software. Considering that Android's source code was revealed last week, it's quite reasonable to assume that many malicious hackers had already figured out this vulnerability, and making the news public seems to serve a valuable purpose. It's unfortunate that Google chose to point fingers, rather than thanking the researcher and focusing on patching the security hole.
In the last year or so, there's been a disturbing trend of companies adding absolutely ridiculous and counterproductive "security" questions to various sites. Most of these do absolutely nothing good in terms of security. In fact, it seems the more ridiculous these features are, the less secure a site actually is. I've been collecting some examples of the more bizarre "security" features I've been seeing lately, with the really ridiculous "security questions" being quite popular. This is when the site gives you a bunch of questions to choose from -- but often those questions are not the sort that have a single answer, or an answer that's easily memorable. For example, I just saw one that asked "What's a place you'd like to visit someday?" Well, there are a few, but I doubt I could remember the one I picked. And what happens if I do visit that place before the next time I need to answer that question?
I was recently discussing this with a colleague who told me that if I wanted to see the most ridiculous example, I should look at Sprint's system, as it had a bunch of security questions based on information pulled about you. Before I had a chance to check it out, the folks over at Consumerist decided to take on Sprint, and discovered not just how ridiculous the questions are, but also some patterns that make it quite easy to take control of any Sprint user's account.
The way it works is Sprint asks you a series of "security" questions that it thinks only you would know the answer to. Things like "what type of car has been registered at your address?" and "which of the following people has lived at your address?" It sounds like some data collection company probably convinced Sprint to purchase access to their data to set up these questions in the name of "security." The problem is that if you know just a little about certain people, you can easily guess the answers. Even worse, a former Sprint employee notes that, mostly to avoid "accidentally" having two right answers, it's usually quite easy to figure out the actual answers. For example, on the automobile question, the incorrect answers are usually expensive luxury vehicles.
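A quick back-of-the-envelope calculation shows just how little protection this kind of challenge provides. The figures below are illustrative assumptions (not Sprint's actual question counts), but the arithmetic makes the point:

```python
# Rough illustration of why multiple-choice "security" questions are
# weak. Assumed setup (not Sprint's real numbers): a challenge of
# 3 questions, each with 4 answer choices.

questions, choices = 3, 4

# Blind guessing: chance of passing the whole challenge in one try.
blind = (1 / choices) ** questions      # (1/4)^3 = 1/64, about 1.6%

# If the decoys are recognizable (e.g. implausible luxury cars), each
# question effectively offers only 2 plausible choices.
informed = (1 / 2) ** questions         # (1/2)^3 = 1/8 = 12.5%

print(f"blind: {blind:.4f}, informed: {informed:.4f}")
```

Even the "blind" odds are far better than guessing a decent password, and once an attacker knows a little about the target, one attempt in eight succeeds under these assumptions. Any system that allows repeated tries makes this worse still.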
This isn't "security." It's barely security theater. It's a huge security hole. Hopefully with a little attention Sprint gets rid of it and puts something more reasonable in place. I just hope it doesn't involve asking me where I hope to travel some day.
Bruce Schneier, one of the sharpest people in the computer security world, has a great piece about why he leaves his home wireless network open for anyone to use. When I wrote something similar a couple of years ago, I caught a lot of flak from people who said that I was opening myself up to security risks, either from people downloading child pornography with my connection or from people hacking into my home computers and stealing my data. But as Schneier points out, neither of these risks is unique to your home wireless network. Like Schneier, I've got several restaurants and coffee shops within walking distance of my apartment that offer free wi-fi access. While it's not impossible that somebody would park their car on my street and use my Internet connection to do something illegal, it seems more likely that they'd do so over a cup of coffee in one of the nearby coffee shops, where they wouldn't arouse suspicion. Moreover, I have a laptop and I visit coffee shops and other locations with open wi-fi connections all the time. If my laptop has security vulnerabilities, I should be a lot more worried about getting cracked on those networks (which make it easy to target a bunch of people at once) than about having the bad luck of living next to a cracker. I need to keep my laptop properly locked down in any event. Once I've done that, an open wi-fi network is a fairly minor risk. Finally, Schneier closes by pointing out that security is a trade-off. If perfect security is your standard, you shouldn't connect to the Internet at all, because there's always a risk of a security breach. Given that we're willing to accept some level of risk if we have a good reason, the question we should be asking is about the relative risks of different activities. The risk of leaving your wireless network open isn't zero, but it's probably small.
Now, I should point out that all of this assumes that you're a reasonably technically savvy individual with an understanding of basic security concepts: that you know how to update your operating system on a regular basis and that you've set the administrative password on your access point to a non-default value. If you're a complete networking neophyte (not that many of those probably read Techdirt), you should probably get some advice from someone more technically savvy about good Internet security practices. Actually, you should do that whether or not you choose to open your wireless network. But on the list of potential network security threats, an open wi-fi network is probably pretty low on the list.
Wired is running an article about FAA concerns about the computer networks on Boeing's new 787. Apparently, the airplanes have been designed with a computer network in the passenger area that can give fliers internet access. That seems reasonable enough. However, somewhere along the way, someone at Boeing decided to connect that network to the plane's control, navigation and communication systems. It's hard to fathom how anyone would ever consider connecting a general passenger network on an airplane to critical systems that actually deal with issues related to keeping the airplane in the sky. Boeing's response is less than satisfactory as well. While it claims it's fixing some of the issues raised, it also says the report is overblown, noting: "There are places where the networks are not touching, and there are places where they are." That really doesn't matter. If the network is touching anywhere it should be seen as a fairly serious problem. There's simply no good reason to connect the two in any way, no matter how "secure." Glenn Fleishman is saying that this report is Wired making a mountain out of a molehill, and insists that the story is probably not a big deal at all. Yet, I'm still wondering why the two systems would ever touch each other.