This is a bit crazy. After a security researcher pointed out that Nokia's Xpress Browser is basically running a giant man-in-the-middle attack on any encrypted HTTPS data you transmit, the company played the whole situation down by saying, effectively: sure, that's what we do, but it's not like we look at anything. This is, to put it mildly, not comforting. The fact that they're running a man-in-the-middle attack in the first place is immensely concerning. The reason they do it is that Xpress is a proxy browser, similar to Opera, that tries to speed up browsing by proxying much of the content -- meaning that all of your surfing goes through Nokia's servers. In some cases, this can make mobile browsing much faster. But the right way to do such a thing is to proxy only unencrypted traffic. With encrypted traffic, you're just asking for trouble.
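To see why this kind of interception is so alarming: a proxy that decrypts your HTTPS traffic has to terminate TLS itself and present its own certificate in place of the real server's. One common defense is certificate pinning, where a client compares the fingerprint of the certificate it actually received against a known-good value. Here's a minimal sketch in Python (the function names and values are illustrative, not anything Nokia-specific):

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def is_expected_server(cert_der: bytes, pinned_fingerprint: str) -> bool:
    """A proxy that terminates TLS must present its own certificate,
    so its fingerprint will not match the pinned one."""
    return fingerprint(cert_der) == pinned_fingerprint
```

A client with the real server's fingerprint pinned would immediately notice the substituted certificate, which is exactly how researchers detect this style of interception.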
After sensing the backlash, Nokia pushed out an update to the browser that appears to remove the man-in-the-middle attack, even while continuing to claim there was nothing wrong in the first place. However, Gaurang K Pandya, the researcher who originally discovered the problem, updated his post to note that it's not all good news:
Just upgraded my Nokia browser, the version now is 18.104.22.168.48, and as expected there is a change in HTTPS behaviour. There is a good news and a bad news. The good news is with this browser, they are no more doing Man-In-The-Middle attack on HTTPS traffic, which was originally the issue, and the bad news is the traffic is still flowing through their servers. This time they are tunneling HTTPS traffic over HTTP connection to their server
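Pandya's post doesn't spell out exactly how the updated browser tunnels HTTPS over HTTP, but the standard mechanism for carrying TLS through a plain-HTTP proxy is the CONNECT method: the client asks the proxy to open an opaque byte tunnel to the destination. The upshot is that even without decrypting anything, the proxy still learns which hosts you visit and still relays every byte. A sketch of what such a request looks like (generic mechanism with a hypothetical host, not Nokia's confirmed implementation):

```python
def connect_request(host: str, port: int = 443) -> bytes:
    """Build the plain-HTTP CONNECT request a client sends to a proxy
    to open an opaque tunnel. The proxy can't read the TLS payload that
    follows, but it sees the destination and carries all the traffic."""
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        f"\r\n"
    ).encode("ascii")
```

So "no more man-in-the-middle" is an improvement, but the metadata and routing concerns remain.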
We've had plenty of stories concerning open WiFi, and there seems to be a general opinion among some that open WiFi is "a bad thing." Some have even tried (and failed) to argue that having an open WiFi network makes you negligent. In some areas, law enforcement has even gone around telling people to lock up their WiFi. Those who argue against open WiFi are generally conflating different issues. It is true that if you use an open WiFi network without securing your own traffic, you open yourself up to snooping from others. Similarly, if others are using your open WiFi, it could lead to at least an investigation if your access point is used for nefarious purposes. But combining those points to claim that open WiFi itself is bad or illegal is a mistake. It is entirely possible to secure your own activities, and to set up an open WiFi network in a reasonable manner that minimizes any such threat.
The EFF and others have been trying to remind people that there are also tremendous benefits to open WiFi in increasing connectivity for everyone. As part of this, they've launched the Open Wireless Movement encouraging people to purposely leave their WiFi networks open (and to take appropriate security precautions). They're pointing out that especially in times of crisis, such open networks can be tremendously useful.
The Open Wireless Movement envisions a world where people readily have access to open wireless Internet connections—a world where sharing one's network in a way that ensures security yet preserves quality is the norm. Much of this vision is attainable now. In fact, many people have routers that already feature "guest networking" capabilities. To make this even easier, we are working with a coalition of volunteer engineers to build technologies that would make it simple for Internet subscribers to portion off their wireless networks for guests and the public while maintaining security, protecting privacy, and preserving quality of access. And we're working with advocates to help change the way people and businesses think about Internet service.
We're also teaching the world about the many benefits of open wireless in order to help society move away from closed networks and to a world in which open access is the default. We are working to debunk myths (and confront truths) about open wireless while creating technologies and legal precedent to ensure it is safe, private, and legal to open your network.
Hopefully we can finally get past the myth that open WiFi is automatically bad and get people moving towards a better understanding of how to use the internet safely while still offering up open access in a reasonable manner.
Jon Brodkin, over at Ars Technica, has an interesting discussion about a paper from some researchers suggesting that we could augment first responder communications efforts by letting them make use of the public's WiFi routers. Basically, if I understand the proposal correctly, if turned on, it would use your router to try to form an ad hoc mesh network with other, similar routers in the area that, in theory, would only be used by public safety first responders. It's no secret that there are efforts underway to make sure emergency personnel have better access to communications spectrum, and this is, at the very least, a creative way of attacking the problem.
The theory is that this doesn't impinge on anyone's security, because it would effectively carve out a separate service on the router, not unlike home WiFi routers that offer up different logins for residents and "guests." Of course, theory and reality aren't always one and the same, and Brodkin reached out to Bruce Schneier who raised his concerns:
“The problems are the same,” Schneier told Ars. “Once you build such a system, you have to build the security to ensure that only the good guys use it. And that's not an easy task. It is far more secure not to have the capabilities in the first place.”
That said, if such a system were purely voluntary, and individuals were able to offer up such connectivity for first responders (or even for anyone else), would that necessarily be so bad? I've been skeptical in the past of attempts to create truly comprehensive mesh networks building on people's home WiFi routers, and there hasn't been much success there. But, perhaps there's something interesting in special use cases, such as one involving first responders. I agree with Schneier that there could be some risks, but I'm not sure how they would be much different than running a basic guest access WiFi network that doesn't involve a password. As long as you're not using that network for sensitive and unencrypted info, it seems like a similar level of risk.
Last fall, we wrote about some plans by the police in Austin, Texas to go wardriving to find open WiFi networks and pressure people into locking up those networks. After a bunch of people got upset about this, noting that open WiFi isn't a crime, the police backed down. However, it appears other police don't have any such qualms. As pointed out by Slashdot, police in Queensland, Australia are doing a similar wardriving campaign. The official announcement of the program greatly exaggerates the risk here:
Detective Superintendent Brian Hay said police have identified a large number of homes and businesses within the greater Brisbane area with wireless connections that are not secure or have limited protection. “These people may as well put their bank account details, passwords and personal details on a billboard on the side of the highway.”
Except that's not necessarily true. Banks and most sites that require passwords have long known to use SSL encryption. It's not perfect, but it's not posting your password on a billboard on the side of the highway by a long shot.
“Unprotected or unsecured wireless networks are easy to infiltrate and hack. Criminals can then either take over the connection and commit fraud online or steal the personal details of the owner. This is definitely the next step in identity fraud.”
That could be true in some cases, but it's not absolutely true, and plenty of people can be perfectly safe using open WiFi with a few common sense precautions. It's sad that the police would exaggerate like this.
It appears that Apple is the latest company to take a "kill the messenger" approach to security vulnerabilities. Hours after security researcher Charlie Miller found a huge vulnerability in iOS, which would allow malicious software to be installed on iOS devices, Apple responded by taking away his developer's license.
The obvious implication: don't search for security vulnerabilities in Apple products, and if you do find them, keep them to yourself.
First off, here's Miller explaining the security hole:
To be fair, Miller did get Apple to approve an app that he was using to demo the security flaw. However, kicking him out of its developer program is exactly the wrong response. Miller, clearly, was not looking to use the code maliciously -- just demoing a problem with their system. In other words, he was helping Apple become more secure, and they punished him for it. The message seems to be that Apple doesn't want you to help make their system more secure. Instead, they'd rather let the malicious hackers run wild. As Miller noted to Andy Greenberg at Forbes (the link above):
“I’m mad,” he says. “I report bugs to them all the time. Being part of the developer program helps me do that. They’re hurting themselves, and making my life harder.”
And, no, this is not a case where he went public first either. He told Apple about this particular bug back on October 14th. Either way, this seems like a really brain-dead move by Apple. It's only going to make Apple's systems less secure when it punishes the folks who tell it about security vulnerabilities.
In what may be one of the more ridiculous reactions to the latest (failed) attempt at putting bombs on airplanes, some security consultants are suggesting the ridiculously confused idea that law enforcement may use it as a reason to no longer allow WiFi or mobile phone connectivity on airplanes. The idea is that by adding connectivity, you provide remote access to a bomb and a way to set it off:
In-flight Wi-Fi "gives a bomber lots of options for contacting a device on an aircraft", Alford says. Even if ordinary cellphone connections are blocked, it would allow a voice-over-internet connection to reach a handset.
"If it were to be possible to transmit directly from the ground to a plane over the sea, that would be scary," says Alford's colleague, company founder Sidney Alford. "Or if a passenger could use a cellphone to transmit to the hold of the aeroplane he is in, he could become a very effective suicide bomber."
But... if you actually think about it for more than a few seconds, this makes almost no sense. First of all, that final sentence makes no sense at all. A suicide bomber on an airplane can already do this. They don't even have to use a cellular network, but any one of plenty of remote wireless options to set up a network between themselves and a bomb stowed away somewhere. Furthermore, they could already use cellular networks (if they're flying over land where such networks exist) -- just not legally. But somehow I doubt a terrorist intent on blowing up an airplane cares about following the FCC rules on using mobile phones on airplanes. As for the terrorist on the ground using WiFi to remotely connect to a bomb... again that's an unlikely scenario. While it's possible that someone could configure such a bomb to automatically log itself on to an in-flight WiFi system, it would still need to figure out how to get through the sign-on and payment setup. Possible? Perhaps. Likely? Not really. It would seem like there are much more reasonable options -- again, such as just using the existing cellular networks. Hopefully this is the idle speculation of these "consultants," rather than anything that any law enforcement agency is taking seriously. But, then again, these are the same law enforcement agencies that make me remove my shoes every time I want to fly.
Late last week, of course, Google 'fessed up to the fact that it was accidentally collecting some data being transmitted over open WiFi connections with its Google Street View mapping cars. As we noted at the time, it was bad that Google was doing this and worse that they didn't realize it. However, it wasn't nearly as bad as some have made it out to be. First of all, anyone on those networks could have done the exact same thing. As a user on a network, it's your responsibility to secure your connection. Second, at most, Google was getting a tiny fraction of any data, since it only captured a quick snippet as it drove by. Third, it seemed clear that Google had not done anything with that collected data. So, yes, it was not a good thing that this was done, but the actual harm was somewhat minimal -- and, again, anyone else could have easily done the same thing (or much worse).
That said, given the irrational fear over Google collecting any sort of information in some governments, this particular bit of news has quickly snowballed into investigations across Europe and calls for the FTC to get involved in the US. While one hopes that any investigation will quickly realize that this is not as big a deal as it's being made out to be, my guess is that, at least in Europe, regulators will come down hard on Google.
However, going to an even more ridiculous level, the class action lawyers are jumping into the game. Eric Goldman points us to a hastily filed class action lawsuit filed against Google over this issue. Basically, it looks like the lawyers found two people who kept open WiFi networks, and they're now suing Google, claiming that its Street View operations "harmed" them. For the life of me, I can't see how that argument makes any sense at all. Here's the filing:
Basically, you have two people who could have easily secured their WiFi connection or, barring that, secured their own traffic over their open WiFi network, and chose to do neither. Then, you have a vague claim, with no evidence, that Google somehow got their traffic when its Street View cars photographed the streets where they live. As for what kind of harm it did? Well, there's nothing there either.
My favorite part, frankly, is that one of the two people involved in bringing the lawsuit, Vicki Van Valin, effectively admits that she failed to secure confidential information as per her own employment requirements. Yes, this is in her own lawsuit filing:
Van Valin works in the high technology field, and works from her home over her internet-connected computer a substantial amount of time. In connection with her work and home life, Van Valin transmits and receives a substantial amount of data from and to her computer over her wireless connection ("wireless data"). A significant amount of the wireless data is also subject to her employer's non-disclosure and security regulations.
Ok. So your company has non-disclosure and security regulations... and you access that data unencrypted over an unencrypted WiFi connection... and then want to blame someone else for it? How's that work now? Basically, this woman appears to be admitting that she has violated her own company's rules in a lawsuit she's filed on her own behalf. Wow.
While there's nothing illegal about setting up an open WiFi network -- and, in fact, it's often a very sensible thing to do -- if you're using an open WiFi network, it is your responsibility to recognize that it is open and any unencrypted data you send over that network can be seen by anyone else on the same access point.
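To make that risk concrete: any station on the same open access point can capture unencrypted frames, and an unencrypted HTTP login is trivially readable from the raw bytes. A toy illustration in Python (the "sniffed" packet here is fabricated for the example; actually capturing traffic requires a tool like tcpdump in monitor mode):

```python
def cleartext_form_fields(http_payload: bytes) -> dict:
    """Pull form fields out of an unencrypted HTTP POST body --
    exactly what any nearby listener on an open WiFi network sees.
    Over HTTPS, the same bytes on the air would be ciphertext."""
    body = http_payload.split(b"\r\n\r\n", 1)[-1]
    fields = {}
    for pair in body.split(b"&"):
        if b"=" in pair:
            key, value = pair.split(b"=", 1)
            fields[key.decode()] = value.decode()
    return fields

# A fabricated packet, for illustration only:
packet = (b"POST /login HTTP/1.1\r\n"
          b"Host: example.com\r\n"
          b"Content-Type: application/x-www-form-urlencoded\r\n"
          b"\r\n"
          b"user=alice&password=hunter2")
```

The fix isn't closing the network; it's encrypting the sensitive traffic, which HTTPS does regardless of whether the WiFi is open.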
This is clearly nothing more than a money grab by some people, and hopefully the courts toss it out quickly, though I imagine there will be more lawsuits like this one.
Germany's top criminal court ruled Wednesday that Internet users need to secure their private wireless connections by password to prevent unauthorized people from using their Web access to illegally download data.
Internet users can be fined up to €100 ($126) if a third party takes advantage of their unprotected WLAN connection to illegally download music or other files, the Karlsruhe-based court said in its verdict.
"Private users are obligated to check whether their wireless connection is adequately secured to the danger of unauthorized third parties abusing it to commit copyright violation," the court said.
This is backwards in so many ways. First, open WiFi is quite useful, and requiring a password can be a huge pain, limiting all sorts of individuals and organizations who have perfectly good reasons for offering free and open WiFi. Second, fining the WiFi hotspot owner for actions of users of the service is highly troubling from a third party liability standpoint. The operator of the WiFi hotspot should not be responsible for the actions of users, and it's troubling that the German court would find otherwise. This is an unfortunate ruling no matter how you look at it.
from the yeah,-because-the-eavesdroppers-care dept
The big news in security circles this week is that a security researcher claims to have cracked the encryption used to keep GSM mobile phone calls private. It looks like he and some collaborators used a brute force method. He admits that it requires about $30,000 worth of equipment to decrypt calls in real-time, but that's pocket change for many of the folks who would want to make use of this. What's much more interesting (and worrisome) is the GSM Association's (GSMA) response to this news:
"This is theoretically possible but practically unlikely," said Claire Cranton, an association spokeswoman. She said no one else had broken the code since its adoption. "What he is doing would be illegal in Britain and the United States. To do this while supposedly being concerned about privacy is beyond me."
There are so many things wrong with that statement it's hard to know where to begin. First, claiming it's "theoretically possible, but practically unlikely" means that it's very, very possible and quite likely. To then say that no one else had broken the code since its adoption fifteen years ago is almost certainly false. What she means is that no one else who's broken the code has gone public with it -- probably because it's much more lucrative keeping that info to themselves. Next, blaming the messenger by announcing that cracking the code is "illegal in Britain and the United States" is not what anyone who uses a GSM phone should want to hear. They should want to know how the GSMA is responding and fixing the problem -- not how they're responding to the public release. Finally, if it's "beyond" her why cracking a code used for private conversations and showing that it's insecure is all about being concerned about "privacy" -- she should be looking for a different job. This has everything to do with privacy. The GSMA claims that the code is secure for private conversations, and this group of folks is showing that it is not. That seems to have everything to do with privacy.
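Some back-of-the-envelope arithmetic shows why "theoretically possible" understates things. GSM's A5/1 cipher uses a 64-bit key, and public analyses have reported that many deployments effectively fix ten of those bits to zero, leaving roughly 2^54 candidates (those figures come from outside reporting, not this article, and the guess rate below is a pure assumption):

```python
# Rough A5/1 brute-force estimate. The 2**54 effective keyspace and
# the guess rate are assumptions for illustration only.
effective_keys = 2 ** 54
guesses_per_second = 10 ** 9  # hypothetical dedicated hardware
worst_case_days = effective_keys / guesses_per_second / 86_400

print(f"{worst_case_days:.0f} days")  # about 208 days, worst case
```

Months of naive search is already within reach of a motivated attacker, and precomputed rainbow tables, which is what this attack reportedly relies on, trade that time for storage to enable real-time decryption. That's a long way from "practically unlikely."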
Last year, it became clear that REAL ID was dead on arrival as pretty much everyone was against it, and states were refusing to implement it. With the changing of the administration, it seemed like REAL ID was finally going to die completely... but apparently not just yet. The EFF alerts folks to the fact that the same concept has basically been reintroduced under the name PASS ID, as if that would trick people.
The plan sounds just as bad and unnecessary:
Proponents seem to be blind to the systemic impotence of such an identification card scheme. Individuals originally motivated to obtain and use fake IDs will instead use fake identity documents to procure "real" drivers' licenses. PASS ID creates new risks -- it calls for the scanning and storage of copies of applicants' identity documents (birth certificates, visas, etc.). These documents will be stored in databases that will become leaky honeypots of sensitive personal data, prime targets for malicious identity thieves or otherwise accessible by individuals authorized to obtain documents from the database. Despite some alterations to the scheme, PASS ID is still bad for privacy in many of the same ways the REAL ID was.
But why let that stop the gov't from coming up with more ways to keep tabs on you?