Another Company Thinks The Best Way To Handle A Security Hole Is To Send A Lawyer After The Person Who Discovered It

from the Firmware-Patch-1.3.5,-Esq. dept

Security researcher finds security hole; attempts to report it through proper channels and is ignored/rebuffed/threatened with arrest/lawsuits. Film at 11.

Apparently, handling these sorts of situations in the worst way possible is never not going to be an option.

Security researcher Mike Davis, along with colleagues at IOActive, found a number of security issues with electronic locks made by the Oregon-based firm CyberLock. But after several failed attempts over the last month to disclose the findings to CyberLock and its parent company Videx, they received a letter from CyberLock’s outside law firm, Jones Day, on April 29, a day before they planned to publicly publish their findings.

So, a security researcher did what he was supposed to do — research security — and tried to inform the affected company. And now the United States’ largest law firm (a trademark bully with inordinately thin skin) has responded with threats of the mostly-veiled variety. Davis posted the letter to his Google+ account and let his opinion of the legal threats be known through the editorialized file name (asshat0.png).

And, as if to assure everyone that Jones Day’s grasp on intellectual property law remains less than firm, attorney Jeff Rabkin invokes two very questionable avenues of attack: violation of CyberLock’s licensing agreements and the anti-circumvention statutes built into the DMCA. As for the first, Davis purchased the lock secondhand, which means he’s not subject to CyberLock’s licensing agreements, seeing as he never entered into one by purchasing directly. Secondly, the DMCA contains circumvention exemptions for encryption research and security research, both of which cover Davis’ activities.

This security hole Davis found could be a big problem. The electronic locks the firm manufactures secure all sorts of critical structures.

The systems are used in metro stations in Amsterdam and Cleveland, in water treatment facilities in Seattle and Atlanta, Georgia and at the Temple Terrace Police Department in Florida, among other places. The company’s marketing literature also promotes use of the locks in data centers and airports.

CyberLock pretty much claims its locks are ultra-secure. Davis’ research proves otherwise. According to what he found, the keys are stored in plaintext in the lock’s firmware and this information is transmitted to the key from the lock during the authentication process. This transmission is encrypted, but the encryption used is weak.
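Davis’ write-up doesn’t need to be reproduced here to see why weak encryption of key material is fatal: such schemes generally fall to known-plaintext attacks. A minimal sketch (using a hypothetical repeating-XOR scheme for illustration, not CyberLock’s actual protocol) shows how an eavesdropper who can predict even a few bytes of the plaintext recovers everything:

```python
# Hypothetical illustration, NOT CyberLock's real protocol: a lock that
# "encrypts" transmitted key material with a short repeating-XOR keystream.

def xor_bytes(data: bytes, keystream: bytes) -> bytes:
    """XOR data against a repeating keystream."""
    return bytes(b ^ keystream[i % len(keystream)] for i, b in enumerate(data))

# Toy "lock" side: secret key material XORed with a 4-byte keystream.
keystream = b"\x5a\xa3\x17\xc9"
secret_key = b"DOOR-KEY-0042"
ciphertext = xor_bytes(secret_key, keystream)

# Attacker side: suppose the protocol begins with a predictable header
# (here we cheat and use the first 4 plaintext bytes as that header).
known_plaintext = secret_key[:4]
recovered_keystream = xor_bytes(ciphertext[:4], known_plaintext)

# With the keystream recovered, the entire transmission falls.
recovered_key = xor_bytes(ciphertext, recovered_keystream)
print(recovered_key)  # b'DOOR-KEY-0042'
```

The point of the sketch is that the weakness is structural: once any part of the plaintext is predictable, a repeating keystream gives up the rest for free, with no lab equipment required.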

With this knowledge in hand, Davis began attempting to contact CyberLock on March 31st. Five more attempts followed but no response was received until the letter from the law firm arrived on April 29th. A second, more aggressive letter followed on May 4th.

Among the things Jones Day attorney Jeff Rabkin took issue with was Davis’ “aggressiveness” in demanding that he only discuss the vulnerability with CyberLock’s technical staff. Rabkin has actually issued a statement on the incident — somewhat of a rarity in litigious situations like these — in which he argues the hole Davis found isn’t a big deal because it would take tools and skill to exploit it.

[company name redacted] does not claim, and never has, that a door protected by one of its products is impregnable. It is simply common sense that anyone with the time, sophistication and resources to engage in IOActive’s methodology could more simply defeat a [company name redacted] product by drilling the lock off the door, or for that matter chopping the door down with an axe. To suggest, as your report does, that [company name redacted]’s products suffer from “severe” vulnerabilities simply because you were able to develop a bypass in your lab ignores the fact that the exploit in question was not possible without the use of costly and sophisticated lab equipment and highly skilled technicians—not exactly a real-world scenario for the intended use of [company name redacted] products.

While there’s a certain amount of truth to his assertions (faster, less-work-intensive “workarounds” will always be preferred by the majority of criminals), it’s not exactly as impossible as Rabkin makes it appear. While most criminals will not have access to lab technicians and equipment, some will. And the fact that these are being used to secure sensitive targets means the flaw is far more likely to draw the attention of technically-adept criminals. And the argument itself is somewhat self-defeating. If the hole is so impossible to exploit effectively, it would follow that CyberLock would have had no issue with Davis releasing his findings. The summoning of its legal representation suggests it thinks otherwise.

While CyberLock and its representation may feel exploitation of this security flaw is unlikely, that’s no excuse for handling it the way they did. Davis made several attempts to give CyberLock a chance to respond before taking the flaw public, but the company did nothing more than tell him to shut up using its Jones Day proxy.

With few exceptions (companies who participate in bug bounty programs, mostly), it’s become hazardous to your freedom and financial security to inform companies of security flaws.

Companies: cyberlock, ioactive, jones day


Comments on “Another Company Thinks The Best Way To Handle A Security Hole Is To Send A Lawyer After The Person Who Discovered It”

38 Comments
That One Guy (profile) says:

Re: Re:

Pretty much.

With so many companies playing ‘shoot the messenger’ with anyone unwise enough to privately inform them of found vulnerabilities, quietly informing a company of a vulnerability may be the more ‘polite’ and ‘responsible’ way to handle it, but doing so involves taking a huge risk, one you’d have to be pretty stupid to take.

As such, at this point not contacting the company first is the smarter thing to do: go public with your findings (anonymously, of course), and let them deal with the fallout they brought on themselves, first by failing to spot the problem in the first place, and second by caring more about threatening those who would help them than thanking them for doing so.

That One Guy (profile) says:

Re: Re: Re: Re:

Yeah, at this point it would/should be ‘walk away’, ‘A bag of what now?’ if asked about it.

You’d have to be pretty stupid to call something like that in, as which do you think the police would prefer to do, go through all the trouble of trying to find the actual dealer/user, something they may or may not be able to manage, and either way is going to take work, or go after the schmuck who reported it as the ‘most likely suspect’, piling on charges until they cave and take a plea deal?

John Fenderson (profile) says:

Re: Re:

“I’d laugh if one of these days, someone just said the hell with it just posting these flaws…”

In the old days, that’s exactly what security researchers did. They just published their results right up front. Companies complained, arguing (with merit, in my opinion) that it would be better for everyone’s security if they had some advance warning so they could have a fix ready when public disclosure happened.

It would be a true shame if the misbehavior of companies caused security researchers to go back to the old ways.

Mason Wheeler (profile) says:

Rabkin has actually issued a statement on the incident — somewhat of a rarity in litigious situations like these — in which he argues the hole Davis found isn’t a big deal because it would take tools and skill to exploit it.

This guy doesn’t understand the exploitation of electronic vulnerabilities. It’s a common enough misunderstanding; not getting it is the primary reason why DRM continues to be used today.

Here’s the part he doesn’t get: Yes, it takes a lot of tools and skill to figure out how to exploit it. But once one person with the tools and skill does all that hard work and publishes his results, it then becomes trivial for people with a much lesser degree of tools and skill to reproduce that work and do the same thing. Cracked once is cracked everywhere, forever.

Anonymous Coward says:

Mason Wheeler (profile), May 7th, 2015 @ 10:53am
[….]

But once one person with the tools and skill does all that hard work and publishes his results, it then becomes trivial for people with a much lesser degree of tools and skill to reproduce that work and do the same thing. Cracked once is cracked everywhere, forever.

But that’s exactly the (misguided) point these dummy lawyers are making: ‘Lesser-skilled/equipped persons couldn’t exploit the vulnerability if you didn’t tell them about it. You’re abetting criminal activity simply by publishing the flaw.’

That Anonymous Coward (profile) says:

Re: They aren't worried about criminals finding this out.

Correction: they are worried that their customers’ legal representation will find out and begin litigation against them, using the fact that they ignored a known exploit of their system.

Of course the lock company could just be taking a page from that hotel lock system who built a flawed product then demanded their customers pay to fix the problem.

One expects there might be some secure locations looking for a new lock vendor as they discovered the old supplier actively tried to hide flaws in their system rather than work to fix them.

Rich Kulawiec (profile) says:

Disclosure policies

But that’s exactly the (misguided) point these dummy lawyers are making: ‘Lesser-skilled/equipped persons couldn’t exploit the vulnerability if you didn’t tell them about it.

This is also what fans of “responsible disclosure” might say. However, this presumes that only one person/organization/company is aware of the flaw… and that’s an incredibly naive assumption. It’s naive first because what one person can find, another can find. And it’s naive second because we know that there are individuals, organizations, companies, and governments spending a ridiculous amount of time looking for exactly these kinds of security problems. We also know that a great many of them won’t share their findings with the vendor OR with the public: they’ll keep them against the day when they’d like to exploit them.

Add to this mix the litany of vendor tactics: denial, intimidation, censorship, blame, accusations, DMCA invocation, more denial, evasion, stonewalling, still more denial… and it becomes clear that even those who buy into “responsible disclosure” and want to practice it face one heck of an uphill battle. As we see here.

And then there’s the recent case of an XSS vulnerability in WordPress, as explained here: http://klikki.fi/adv/wordpress2.html

Quoting from that page:

WordPress has refused all communication attempts about our ongoing security vulnerability cases since November 2014. We have tried to reach them by email, via the national authority (CERT-FI), and via HackerOne. No answer of any kind has been received since November 20, 2014. According to our knowledge, their security response team have also refused to respond to the Finnish communications regulatory authority, who has tried to coordinate resolving the issues we have reported, and to staff of HackerOne, which has tried to clarify the status of our open bug tickets.

Vendors who are the beneficiaries of the largesse of security researchers — who are, after all, merely pointing out instances where the vendors failed to secure their products — should beware of antagonizing them. After all, next time the researchers might just decide to sell the exploit and quietly pocket the profits without bothering to try to communicate with a vendor that would prefer to threaten rather than listen. That’s not good for the vendors…or their customers.

That One Guy (profile) says:

Re: Re:

They don’t bother thinking that far ahead. As far as they’re concerned, if they manage to silence anyone stupid enough to try and get in touch with them first to solve the issue without making it public, that’s it, problem solved.

That their zealous ‘shoot the messenger’ behavior will instead all but force security researchers, or others who find vulnerabilities to post them publicly and anonymously, rather than privately, apparently never crosses their thick skulls.

Bergman (profile) says:

Re: Re: Re:

To an engineer, someone who discovers a problem before it bites them in the ass is a hero — because it lets them prevent trouble.

To a bureaucrat, the guy who discovered the problem created it — it didn’t exist before then. And nobody likes a troublemaker.

Report a problem to an engineer and you get heart-felt thanks. Report the same problem to a bureaucrat and get an FBI SWAT team kicking down your door.

Anonymous Coward says:

Specialized equipment?

Let me tell you what specialized equipment one probably needs to break this system after an exploit has been revealed: one multi-interface 4-8 core computer, and one person who more or less knows what he is doing, to create a way to fit the pins right to the interface in the lock.
The first we all carry in our pockets, and the second is not as hard to come by as this guy might think.

Anonymous Coward says:

There's something missing here.

It’s easy to make a lawyer look like an asshat – just write to them and wait for the reply. So where are the letters that were sent TO the lawyer?
Reading between the lines of the first response, IOActive were withholding the information while threatening to go public. Why were they withholding? Because they wanted to get paid? Isn’t that extortion?

That One Guy (profile) says:

Re: Read article, then comment

But after several failed attempts over the last month to disclose the findings to CyberLock and its parent company Videx, they received a letter from CyberLock’s outside law firm, Jones Day, on April 29, a day before they planned to publicly publish their findings.

They were withholding the information because they were hoping that the company in question would get back to them and tell them that they were looking into the problem that the researchers had found.

When all they got was dead silence, and it looked like the company was just going to ignore the problem, only then did they mean to go public and force the company to admit that there was a problem that needed to be addressed.

It had nothing to do with ‘extortion’, and was instead basic courtesy, followed by forcing the issue the only way they could.

Anonymous Coward says:

Re: Re: Read article, then comment

And they’re not after a payday? Why force the issue?

But seriously, these stories are always missing the correspondence from the researchers. What was said in ‘several’ letters that made the company want to lawyer-up? The real story is in those letters.

There was definitely a threat of public disclosure on the 30th of April. Sorry, the “basic courtesy” of a looming deadline.

Anonymous Hero :P says:

Re: Re: Re: Read article, then comment

And they’re not after a payday?

So what if they want to get paid? After all, they probably spent a considerable amount of time finding this flaw.

Why force the issue?

So that people who use this lock are not vulnerable? Ever think of them?

But seriously, these stories are always missing the correspondence from the researchers. What was said in ‘several’ letters that made the company want to lawyer-up? The real story is in those letters.

If those letters contained any threats, don’t you think the letter from the lawyer would contain the appropriate response to them? E.g.: if the researchers were extorting them, do you really think the letter from the law firm wouldn’t mention umpteen laws about how it’s illegal, and that they might call in the feds/whoever on them?

There was definitely a threat of public disclosure on the 30th of April. Sorry, the “basic courtesy” of a looming deadline.

And you think nicely asking this security company is going to get them to do anything? Get your head back in the real world. Even Google gives companies only 90 days to respond to the vulnerabilities it finds, and goes public after that. Because if it didn’t, the company would have no incentive to fix its products/reputation.

Anonymous Coward says:

Re: Re: Re:2 Read article, then comment

Is there some nuance I’m missing between “There wouldn’t be threats” and “Asking nicely isn’t going to get very far”?

I’d just, in the interests of full disclosure, like it if, when these guys go crying to the media, they tabled all the documents and not just the lawyer letter they got. It’s somewhat disingenuous, and it leaves a large part of the white-hat story untold – a story that is supposed to be all about disclosure.

That One Guy (profile) says:

Re: Re: Re: Read article, then comment

And they’re not after a payday? Why force the issue?

The systems are used in metro stations in Amsterdam and Cleveland, in water treatment facilities in Seattle and Atlanta, Georgia and at the Temple Terrace Police Department in Florida, among other places. The company’s marketing literature also promotes use of the locks in data centers and airports.

The above is why it’s important to force the issue if the company is going to try and ignore the problem. These aren’t just locks on people’s houses, they’re being used to secure various government and public buildings, and if a company is going to try and promote them for use as such, then they better be as secure as they’re saying.

If they’re not, as is apparently the case, and the company refuses to acknowledge the flaw and fix it on their own, the public and government deserves to know about it so they can make informed decisions, such as switching to a company that cares more about the security of their product, rather than protecting their reputation at the cost of product security.

But seriously, these stories are always missing the correspondence from the researchers. What was said in ‘several’ letters that made the company want to lawyer-up? The real story is in those letters.

Most likely? Several variations of ‘Hey, we found a security flaw, it’s pretty bad, you might want to fix it’ a couple of times, before the researchers realized the company was more interested in acting as though nothing was wrong than admitting to a problem, and they decided to force the issue by telling the company if they had no interest in fixing the problem, then they’d give them some incentive to care by going public with their findings.

As for why they decided to lawyer up, that was most likely due to the realization that they couldn’t just pretend that nothing was happening, and they could either a) admit that there was a problem with their products, and spend time and money fixing the problem, or b) try and silence the ones who pointed out the problem, and hope no-one else would find it. Guess which one they went with?

Anonymous Coward says:

[Criminals] could more simply defeat a [company name redacted] product by drilling the lock off the door, or for that matter chopping the door down with an axe.

Wait… A lawyer representing a lock company made a legal threat based on the idea that all locks are useless? CyberLock is basically saying “don’t avoid buying our products because they’re flawed, avoid buying them because they serve no purpose whatsoever.”
