Google Attacks The Messenger Over Android Vulnerability

from the not-very-friendly dept

There was plenty of news over the weekend about a security flaw found in Google’s Android mobile operating system that could allow certain websites to run attack code and access sensitive data. The security researchers have said they won’t reveal the details of the flaw, even though it’s apparently a known flaw in some of the open source code that Android uses, which Google did not update. However, that didn’t stop Google from attacking the messenger, claiming that the security researcher who discovered the flaw broke some “unwritten rules” concerning disclosure. First of all, there is no widespread agreement on any such “unwritten rules,” and many security researchers believe that revealing such flaws is an effective means of getting companies to patch their software. Considering that Android’s source code was released last week, it’s quite reasonable to assume that many malicious hackers had already figured out this vulnerability, and making the news public seems to serve a valuable purpose. It’s unfortunate that Google chose to point fingers rather than thanking the researcher and focusing on patching the security hole.

Companies: google


Comments on “Google Attacks The Messenger Over Android Vulnerability”

Bob says:

Flaws and Patches

First, Android was built from FOSS, so it was required to be given back to FOSS.
Second, flaws need to be illuminated faster in the FOSS world than in the “proprietary” world. More eyes on the code.
Let Google whine; it has built a multi-billion-dollar company on FOSS and FOSS tools.

Google, put someone on it and fix the problem already!

ehrichweiss says:

Re: Google..

Sadly, I think this convinces me that Google isn’t playing for our team any more. It’s very sad because I had always given them the benefit of the doubt, but from now on I’m going to scrutinize Google’s every action with a completely different view.

Also sad is that I’m certain that the apocalypse is going to become a self-fulfilling prophecy thanks to the fundamentalists rising in the ranks of our governments.

jonnyq says:

FOSS

“Charlie Miller, the man who discovered the Android flaw, has followed this path in the past, most notably when he sold details of a flaw in the Linux kernel to the U.S. National Security Agency for $50,000”

That’s just bad form. Not saying I wouldn’t do it, but it’s bad form. In open source, it’s better to just file a bug in the bug tracking system as a security bug and let the handlers respond before going to the papers. Mozilla even pays a bug bounty for this. It’s more like $500 instead of $50,000, but people complain less.

I’m assuming that Google has a maintained Bugzilla-style system. That may not be the case.

Anonymous Coward says:

Not trusting the computer

Maybe that’s the problem: these “flaws,” “bugs,” and the like are actually “asks” from the NSA/CIA to get into your machine. SvcHost keeps wanting to connect to some happy IP address owned by XO Communications in Virginia every hour.

I’ve never seen so many bug patches in my life as in the past 6 months. Then they won’t tell you the details of what the update does. Maybe it does nothing but provide access to your files.

I remain curious what the top CIA guy meant when he said the only safe computer was ‘unplugged in a corner and not connected to any network’.

Does he know something we don’t? Thanks Top CIA guy for the heads-up.

TX CHL Instructor (profile) says:

Re: Not trusting the computer

The CIA guy was right. To be useful, you have to actually run the computer, but you can be reasonably secure if you don’t connect it to any network.

I have a customer, an attorney, for whom I built a client database system (turnkey, including hardware). Just before I loaded his data into the system, I disconnected it from his network and told him to NEVER connect that system to the internet again, and to NEVER install any other software on it. I provided several USB flash drives for backups using backup software that I wrote, and told him to NEVER put those drives into any other machine except the hot backup that I also provided. I super-glued an RJ45 plug into the Ethernet connectors on both systems. As long as he follows those directions, those systems cannot be hacked unless somebody gains physical access to them. (OK, it’s possible for somebody with the right gear to eavesdrop remotely, but the script kiddies don’t have access to that sort of thing. Yet.)
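
The backup software itself needn’t be anything exotic. Purely as an illustration (a rough sketch, not the actual code I delivered; the paths and manifest name here are invented), the core of such a copy-to-USB job with integrity checking could look like this in Python:

```python
# backup_sketch.py -- illustrative sketch only, not the actual delivered software.
# Copies a client-database directory to a dedicated USB drive and records
# SHA-256 checksums so the hot-backup machine can verify the copy before trusting it.
import hashlib
import shutil
from pathlib import Path

DATA_DIR = Path("/srv/client-db")    # hypothetical location of the database files
USB_MOUNT = Path("/mnt/backup-usb")  # hypothetical mount point of the dedicated drive

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def run_backup() -> None:
    dest = USB_MOUNT / "backup"
    if dest.exists():
        shutil.rmtree(dest)          # keep exactly one full copy per drive
    shutil.copytree(DATA_DIR, dest)

    # Write a manifest the restore side can check before trusting the copy.
    manifest = [
        f"{sha256_of(f)}  {f.relative_to(dest)}"
        for f in sorted(dest.rglob("*")) if f.is_file()
    ]
    (USB_MOUNT / "MANIFEST.sha256").write_text("\n".join(manifest) + "\n")

if __name__ == "__main__":
    run_backup()
```

The point of the manifest is that the hot backup can re-hash what it reads off the drive and refuse a restore if anything doesn’t match, without either machine ever touching a network.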

http://www.chl-tx.com The 2nd Amendment isn’t about hunting ducks.

Dosquatch says:

Re: Not trusting the computer

I remain curious what the top CIA guy meant when he said the only safe computer was ‘unplugged in a corner and not connected to any network’.

What he means is that the only foolproof security is total inaccessibility, and not just with computers. Any lock can be defeated. Any wall can be breached. The only way to be certain is to put it where the locks, doors, and walls themselves cannot be reached. Launched to the moon, for example.

So, too, with computers. As long as it is powered up and waiting to accept a login, your security can be breached. All an attacker needs is to know enough to fool the computer into believing he is you.

nasch says:

Re: Re: Not trusting the computer

Any lock can be defeated. Any wall can be breached. The only way to be certain is to put it where the locks, doors, and walls themselves cannot be reached. Launched to the moon, for example.

If you can launch it to the moon, then somebody else can launch themselves to the moon and go get it and break into it. I think when we’re talking about security here, we mean remote access security. Clearly no physical security system is impenetrable, but that’s not the point. The point is no computer security system is impenetrable either, so the only perfect protection from remote exploits is to completely disconnect the computer from all networks.

Anonymous Coward says:

I remain curious what the top CIA guy meant when he said the only safe computer was ‘unplugged in a corner and not connected to any network’.

Does he know something we don’t? Thanks Top CIA guy for the heads-up.

It makes sense to me – with a slight paranoid touch, you can easily reason that every computer system can eventually be hacked. Therefore, the only way to secure it is to keep it the hell out of the way of the world (“the only safe computer is in the centre of a nuclear explosion” doesn’t have the same ring).

LBD says:

Re: Re:

If your computer’s plugged in, but not connected to any wireless or Ethernet or other method of computer-to-computer communication (I don’t mean disabled, I mean no physical connection), then it’s perfectly safe unless a virus gets onto a data device and kills it.

It still won’t be able to tell your secrets.

JT says:

Re: Re: Re:

lol, I laugh when I see something like this. They’re all corporations that answer to a higher power: stockholders. The larger they get, the more diluted they get with “good and evil.” Why people never seem to get this is beyond me. Those who adamantly defend Apple don’t seem to get that it’s the same.

So start your anti-Google campaigns and wait for your next start-up corporate savior to come along.

Matt says:

False Dilemma

I don’t see why Google can’t have the technical team work on patching the vulnerability while the PR/exec team comments on the situation. I also think you’re being a little disingenuous about the etiquette issue; it’s quite well established (see the nearly 15 years of BugTraq archives and this FAQ specifically) for a vendor to receive a private disclosure followed by a brief delay to allow them time to patch. Sure, Android is open source, so a black hat could discover the flaw on his own (or already has). But if not, releasing it publicly removes any runway the vendor had and turns a vulnerability into a zero-day exploit.

Dosquatch says:

Re: Re: ummm....

How is Google being evil?

FTFA:

After first dismissing the amount of damage to which the flaw exposed users, anonymous Google executives then attempted to discredit the security researcher…

1. pretend the problem isn’t a problem
2. paint the researcher as the real problem
3. ????
4. PROPHET!!1!!!ELEVENTY-ONE!!
Shyptari says:

They're Both Wrong

Okay, in Google’s defense: Yeah, you found the broken code, bravo! Now be a good boy and hand it back to the owner to fix. Done. But when you just diss them and go public with it, that is kinda lame. At least tell them that you’re gonna go public. They woulda probably said “About what?!”… “Oh, okay. You do that, we’ll fix it in the process.”
Everyone = happy.

In Charlie Miller’s defense: Google doesn’t have to whine about it. It’s code; it’s flawed just by the fact that people created it. Deal with it. So you didn’t get a heads up; don’t feel bad for yourself. Say to the guy, “Why didn’t you tell us first… meh, who cares, let’s fix this before it becomes a major problem.” And accept a little hurt pride. It’s not about the ego, it’s about getting it done right. Even if it means everyone knows about it.
Everyone = happy.

My two cents.
But then again, the world doesn’t work my way.

wooster11 (profile) says:

Unwritten Rules

I think the unwritten rules are pretty clear when dealing with security issues in software.

These “Security Experts” (hackers) need to know these rules.

The first step is always to notify the company privately of the security issue to see if they will respond.

It is at that point that a determination is made as to whether or not to go public with the issue.

If a company was not responding to the issue even after being notified (let’s say within 90 days – software development takes time), then the security group has the option to go the public to get the company to move on the issue.

The one good thing I can say is that at least the security researcher isn’t releasing details on the flaw, but he still should have gone to Google in private first.

Clueby4 says:

Unwritten Delusions.

Unwritten rules?! For software?! Today’s modern-age snake oil!? No such expectation should be present.

I can see the benefits of giving software companies a heads up; however, those companies with bad track records, of which Google is a proud member, should not be afforded any such courtesy.

I’m not sure I find the selling of exploits to third parties very palatable, but with the absence of merchantability that the software market thoroughly enjoys, there’s not much one can do about that other than frown 😛

Paulie says:

This is pretty out of character for Google… or at least the way I perceive Google. A quick question: why did they release the source code for Android? Was it for developers? Anyway, it seems that, as mentioned, Google should be glad that this was pointed out publicly, because now fewer people will be clamoring for security software on their phones, because Google will fix this problem… right, Google?
