Wireless Carriers, Hardware Companies Use Flimsy IOT Security To Justify Attacks On Right To Repair Laws

from the lobbyists-within-lobbyists dept

A few years ago, anger at John Deere’s draconian tractor DRM birthed a grassroots tech movement. The company’s lockdown on “unauthorized repairs” turned countless ordinary citizens into technology policy activists after DRM and the company’s EULA prohibited the lion’s share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for “authorized” repair or toying around with pirated firmware just to ensure the products they owned actually worked.

The John Deere fiasco resulted in the push for a new “right to repair” law in Nebraska. This push then quickly spread to multiple other states, driven in part by consumer repair monopolization efforts by other companies including Apple, Sony and Microsoft. Lobbyists for these companies quickly got to work trying to claim that allowing consumers to repair products they own (or take them to third-party repair shops) would endanger public safety. Apple went so far as to argue that if Nebraska passed such a law, it would become a dangerous “mecca for hackers” and other ne’er-do-wells.

Wary of public backlash, many of these companies refuse to speak on the record regarding their attacks on consumer rights and repair competition. But they continue to lobby intensely behind the scenes all the same. The latest example comes courtesy of “The Security Innovation Center,” a new lobbying and policy vehicle backed by hardware vendors and wireless carriers. The group issued a new “study” this week that tries to use the understandable concerns over flimsy IOT security to fuel its attacks on right to repair laws.

The study starts out innocuously enough, noting how the group hired Zogby to run a poll of 1,015 consumers on privacy and security concerns in the internet of broken things era:

“Almost two-in-three American consumers said that the explosive growth of Internet-connected products makes them more concerned about their privacy and security, the survey of 1,015 Americans found. And only 1 in 3 Americans expressed confidence that people they know would not be affected if one of their devices was hacked.”

Which is understandable, especially in an era where countless IOT companies value gee-whizzery over privacy or security. But it doesn’t take long for the real purpose of this study to reveal itself: demonizing efforts to break the monopoly over repair:

“These concerns have placed a focus on security when getting Internet-enabled products fixed: 84 percent value the security of their data over convenience/speed of service….More than 80 percent expect repair professionals to both provide a warranty for their repairs and demonstrate that they are trained or certified to fix their specific product. Further, 75 percent value warranty protections over convenience and 70 percent feel most comfortable having their products fixed by a manufacturer or authorized repair shop. Yet, only 18 percent can determine if an electronics repair shop is protecting their security and privacy.”

In other words, the not so subtle message being sent by hardware vendors and wireless carriers is this: don’t allow third-party or user self repair because you’ll wind up hacked, or worse. That matches the same message being sent by Verizon, Apple, Microsoft and others as they try to convince the public that being able to access less expensive third-party repair vendors (or repair your own devices yourself) will result in reduced security and privacy, dogs and cats living together and the world being ripped off its axis.

Again, the “Security Innovation Center” isn’t much of a center at all. It’s basically just a lobbying and policy vessel created by a New York PR firm (Vrge), backed by other, existing lobbying and policy vessels (CTIA, CompTIA, NetChoice). It’s such a thin veneer that the Center’s press release lists its “executive director” as Josh Zecher, the guy who founded the PR outfit running the campaign. It’s basically just the Russian nesting doll equivalent of lobbying and policy, all to obfuscate Apple’s, Verizon’s and other companies’ blatant disdain for repair competition and consumer rights.



Comments on “Wireless Carriers, Hardware Companies Use Flimsy IOT Security To Justify Attacks On Right To Repair Laws”

48 Comments
Ninja (profile) says:

Hmm, if anything these security problems are the sole responsibility of the industry itself, to the point that only custom-made software or DIY hardware changes will ever fix those problems because said manufacturers couldn’t care less. And I’d further argue that software and hardware should be completely free for whoever wants to pop it open and scrutinize it, because we can’t trust the industry to do their job properly.

Careful with this argument, dear companies. I hope somebody uses it to show the industry can’t be trusted with repairs and patches by itself, and thus that we need strong rights to repair and no DRM circumvention restrictions whatsoever.

Anonymous Coward says:

Let’s assume I own a very large tractor.

Why would I own such a thing?

Most likely, it is to do productive work.

If my tractor does not work and I cannot repair it, I lose money.

If I own a tractor that costs me money, why would I own it?

There is a local park which has a steam train engine in it. The engine was ordered from the factory in 1948, delivered in 1952 and retired in 1954.

Why would a railroad retire and give away a major piece of equipment that should have a service life of 20 years after only 2 years? Would the 1952 coal strike have something to do with this? No coal equals no run equals useless. The railroad replaced the steam engine with a diesel-electric because of coal cost and coal unavailability. There was simply no coal to be had.

Why would one replace a new John Deere tractor with an older model or a foreign import? If you cannot fix it, it is useless. Just a big mound of useless scrap iron. How many tractors that cannot be repaired will John Deere sell before John Deere goes bankrupt? How many steam engines did Baldwin sell after the coal strike? None.

Anonymous Anonymous Coward (profile) says:

Who built in the risk?

Isn’t the bigger risk in the way IoT companies collect and share information, often on the sly? Fixing something and making a mistake wouldn’t cause more sharing, but it might make a device more vulnerable to hackers. Of course, if the IoT companies didn’t set their model up to leak private information to themselves, those same hackers would not have the same paths to take advantage of.

Anonymous Coward says:

Re: Re: And yet...

What does that have to do with the fact that driverless vehicles will be the target of attacks, and that neither the manufacturers nor the government seems to be doing anything about stopping it? I imagine their concern might ratchet up a bit after they are inconvenienced by these rogue pirate vessels.

OA (profile) says:

Re: And yet...

…there are ignorant newbies who think driverless vehicles are a good idea…

I don’t understand the need for an insult here… I like the idea of a driverless car (I am biased though: I sometimes don’t like driving), so long as I am ultimately in control.
Your comment is worthless, as is, but I suspect that in the existing climate, driverless cars will be executed in an irresponsible manner and be hostile to consumers in some way(s).

“conveniently ignoring the fact that they’re just another thing in the IoT. They’re not magically exempt from this dumpster fire.”

Please defend this statement.

Rich Kulawiec (profile) says:

Re: Re: And yet...

“Please defend this statement.”

Have you not been paying attention?

There are insecure TVs.
There are insecure fitness watches.
There are insecure car washes.
There are insecure “smart” locks.
There are insecure pacemakers.
There are insecure sex toys.
There are insecure speed cameras.
There are insecure vacuum cleaners.
There are insecure toys.
There are insecure safes.
There are insecure toasters.

And yet, somehow, amazingly, incredibly, the same group of people who are responsible for all of these are going to avoid making the same set of mistakes with vehicles — because gosh, driverless vehicles are magically different, and so what has failed everywhere else with entirely predictable and depressing monotony is going to succeed here.

To borrow a line from Theo de Raadt, anyone who thinks that will happen is deluded, if not stupid.

Thad (user link) says:

Re: Re: Re: And yet...

the same group of people who are responsible for all of these

I wasn’t aware that Uber made insecure sex toys and Google made insecure vacuum cleaners. Do you have a citation for that?

because gosh, driverless vehicles are magically different

Nothing magic about it. Cars are different because they are far more expensive to produce than any of the objects you’ve mentioned, and because security vulnerabilities in cars are potentially fatal.

If someone compromises your TV, they can use it to spy on what you’re watching.

If someone compromises your self-driving car, they can use it to kill you.

If you don’t see the difference in risk assessment between those two outcomes, then you’re the one who’s deluded or stupid.

Anonymous Coward says:

Re: Re: Re:2 And yet...

“I wasn’t aware that Uber made insecure sex toys and Google made insecure vacuum cleaners. Do you have a citation for that?”

I said “same group”. And I stand by that characterization.

“Cars are different because they are far more expensive to produce than any of the objects you’ve mentioned, and because security vulnerabilities in cars are potentially fatal.”

True (as far as it goes) but neither of these make them magically immune to the same kinds of problems that permeate the entire IoT ecosystem. (They’re also far more complex than most of the things I listed, which means of course that they’re far more susceptible.)

“If someone compromises your self-driving car, they can use it to kill you.”

Yes, but: that’s a naive, best-case scenario: it’s an extraordinarily optimistic outcome that is unsupported by our collective experience with the IoT to date.

When, not if, someone compromises a particular make/model of self-driving car, then — just as we’ve seen with other IoT devices — the most likely outcome will be that they have ALL of that particular make/model, i.e., a class breach.

Think about the consequences of that. Then talk to me about risk assessment.

OA (profile) says:

Re: Re: Re: And yet...

Could you not express everything in rant form?

You claim driver-less cars are like other internet of broken things. EXPLAIN! As in why do you say that? In non-self-evident terms.

Will the cars require the internet to function? Can they be controlled by remote? Can they be remotely shut down by force?… IOT is not the same as yelling BOO! I want a driver-less car, perhaps I shouldn’t.

Thad (user link) says:

Re: And yet...

While there are certainly major concerns about the security of driverless cars, you’re conflating things that, despite cursory similarities, are actually significantly different.

IoT devices have poor security because they are cheaply made by dodgy companies, and because the damage they do is typically external: if Alice sells Bob an IoT toaster, and Mallory pwns it and uses it as part of a DDoS attack against Carol, then Alice and Bob are unaffected. Bob probably doesn’t even know that his toaster is part of a botnet; as far as he knows, his IoT toaster is working fine, and he has no disincentive to buy from Alice (or other cheap, dodgy vendors) in the future.

The first distinction between that scenario and driverless cars is, driverless cars aren’t made cheaply by dodgy no-name vendors. They’re prohibitively expensive, such that only established, big-name companies are able to manufacture and deploy them at scale.

The second distinction is that presumably when you mention security threats to driverless cars, you’re not talking about externalities; you’re not talking about a car merely joining a botnet, you’re talking about an attacker causing a car accident.

That’s not an externality. That’s not some third party Carol being harmed without car buyer Bob’s knowledge. That’s Bob being harmed. The buyer.

The cost/benefit analysis is completely different. If you’re a fly-by-night operation selling IoT toasters, your incentive is to minimize costs. So what if they’re not secure? Your customer probably won’t even know that.

If you’re a major tech company selling driverless cars, your incentive is to minimize accidents. Any car accident is harmful to your brand; any major security vulnerability is potentially devastating. Not only will your customers be put off your product, but there will be massive negative press coverage, and lawsuits. Look how badly VW’s cheating on emissions turned out for the company — an incident where a security vulnerability actually caused fatalities would dwarf that scandal.

Alice doesn’t have any financial incentive to make her toasters secure against third-party attackers. Google has a major financial incentive to make its cars secure against third-party attackers.

This is not to say there will never be security vulnerabilities in driverless cars. This is not to say that Google, Uber, Apple, et al shouldn’t be very, very careful. But your suggestion that the current security problems in IoT devices presage comparable security problems in driverless cars is a false comparison.

Anonymous Coward says:

Re: Re: Re:2 And yet...

The article was light on detail. Was that one mistake in an otherwise secure system, or was the whole thing awful? I lean toward the latter when I see people can cut the brakes through the cell connection.

I don’t have faith that a regulator can find every software bug, but I’m hopeful they might notice the brake controller is connected to the wireless network.

Rich Kulawiec (profile) says:

Re: Re: And yet...

This analysis is fine, as far as it goes. The problem is that it doesn’t go far enough. Let me comment on a couple of points:

“That’s not an externality. That’s not some third party Carol being harmed without car buyer Bob’s knowledge. That’s Bob being harmed. The buyer.”

And everyone else in the car with Bob. And everyone else that Bob’s car hits. And everyone in all the other models of this car with all the other Bobs and everyone in all the other cars those hit. And all the pedestrians and everyone else.

If you’re going to tell me that this kind of class breach is impossible in driverless vehicles even though we’ve seen it in myriad other IoT devices (and servers) (and routers) (and laptops) (and CPUs) (and smartphones) (and SCADA) (and pretty much every other computing device), then that’s an extraordinary claim. Where is the extraordinary proof?

“But your suggestion that the current security problems in IoT devices presage comparable security problems in driverless cars is a false comparison.”

It’s actually a very easy and obvious comparison, because the people working on these vehicles are very intent on repeating the failures of the rest of the IoT ecosystem. They’re working hard on it. They’re spending money and time on it. And you know, we can already see some of the signs that they’re succeeding:

https://arstechnica.com/cars/2018/02/no-one-has-a-clue-whats-happening-with-their-connected-cars-data/

Think about the implications of that. Put the privacy issues aside for a moment and think about what it tells you about the design decisions being made. And think about what it does to the overall system security posture — which is more than just the vehicles.

(non-sequitur) I should probably write this up at length, because trying to articulate a complex argument in little snippets really doesn’t work that well. But in the interim let me refer you to this excellent piece by Zach Aysan:

https://www.zachaysan.com/writing/2018-01-17-self-crashing-cars

He has a slightly different take on it than I do, but I think (not speaking for him) we’re roughly on the same page.

I also have a suggestion — AFTER you read that piece.

Sit down and make a back-of-the-envelope estimate of the available attacker budget (a la http://www.schneier.com/crypto-gram-0404.html#4). Then keep in mind the massive asymmetry between attackers and defenders — that is, we routinely see attackers with only a tiny fraction of defenders’ budgets succeed, but succeed massively. (See: 9/11/2001.) So based on the available attacker budget, pick a multiplicative factor that suits you – 100X, 500X, 617X, whatever — and calculate the defender budget necessary to have a reasonable chance of thwarting attacks.
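For what it’s worth, the arithmetic in that exercise is trivial; here is a toy sketch with entirely made-up numbers (the attacker budget and multiplier below are hypothetical placeholders for the exercise, not real estimates):

```python
# Back-of-the-envelope defender-budget estimate.
# All figures here are hypothetical placeholders, chosen only to illustrate
# the exercise described above.

attacker_budget = 5_000_000   # assumed annual budget of a capable attacker, in dollars
asymmetry_factor = 100        # chosen multiplier reflecting attacker/defender asymmetry

# The exercise: scale the attacker budget by the asymmetry factor to get
# a rough floor on what defenders would need to spend.
defender_budget = attacker_budget * asymmetry_factor

print(f"Estimated defender budget: ${defender_budget:,}")  # $500,000,000
```

Plug in your own numbers; the point stands regardless of the multiplier you pick — the result is a very big number.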

It will be a big number. (If it’s not, you did it wrong.)

Nobody is spending that on vehicle security.

You know what they’re spending money on? They’re spending it on things that make the vehicles LESS secure. Scroll back up and read the Ars Technica piece again.

I’ll try to write something longer that lays out the argument better. But in the meantime, feel free to stick a pin in this, come back in ten years and tell me I was wrong. I’d be happy to be.

Thad (user link) says:

Re: Re: Re: And yet...

If you’re going to tell me that this kind of class breach is impossible in driverless vehicles even though we’ve seen it in myriad other IoT devices (and servers) (and routers) (and laptops) (and CPUs) (and smartphones) (and SCADA) (and pretty much every other computing device), then that’s an extraordinary claim.

It’s also a claim I explicitly did not make. My exact words were "This is not to say there will never be security vulnerabilities in driverless cars."

You’re not off to a good start, Rich. Your very first argument, straight out the gate, is with a strawman.

It’s actually a very easy and obvious comparison, because the people working on these vehicles are very intent on repeating the failures of the rest of the IoT ecosystem. They’re working hard on it. They’re spending money and time on it. And you know, we can already see some of the signs that they’re succeeding:

https://arstechnica.com/cars/2018/02/no-one-has-a-clue-whats-happening-with-their-connected-cars-data/

That article is about spying on user behavior and selling the data. I find that concerning, but I don’t find it surprising; it’s Google’s entire business model. It’s also an entirely separate issue from devices being compromised by third parties.

Sit down and make a back-of-the-envelope estimate of the available attacker budget (a la http://www.schneier.com/crypto-gram-0404.html#4). Then keep in mind the massive asymmetry between attackers and defenders — that is, we routinely see attackers with only a tiny fraction of defenders’ budgets succeed, but succeed massively. (See: 9/11/2001.)

Did you just describe 9/11 as "routine"?

So based on the available attacker budget, pick a multiplicative factor that suits you – 100X, 500X, 617X, whatever

That sounds an awful lot like "pull a number out of your ass" to me.

You know what they’re spending money on? They’re spending it on things that make the vehicles LESS secure. Scroll back up and read the Ars Technica piece again.

I did. And you know what I noticed? It’s not about self-driving cars. It doesn’t mention self-driving cars at all. It’s about traditional, manually-driven cars that connect to the Internet.

And you know what? I’m with you. I wouldn’t buy one. I won’t even turn on Bluetooth in my car; if I want to listen to music from an external player, I use a mini-stereo cable.

But you just defended your thesis about security vulnerabilities in driverless cars by repeatedly referencing an article that does not mention driverless cars. It’s a related issue, but it’s not the same thing.

In much the same way that IoT toasters are not the same thing.

I share your concern that an attack from the other side of the world could compromise driverless cars and cause collisions. But your examples and comparisons are poor.

orbitalinsertion (profile) says:

Re: Re: And yet...

There is a huge difference, for sure, but your not-driverless car is insecure already.

Giant corporations with insecure products are certainly not fly-by-night, but they are at least partly dodgy. Whether or not you consider some of their devices to be “IoT”, they are insecure (possibly by design) and networked. Just, idk – Google, Cisco.

I realize that some of you are arguing against some extreme claims, but you are, at the same time, making somewhat extreme assumptions yourselves.

Also, driverless cars are not yet in production. That rather tends to change things. Will these cars be magically better than current and historical automobiles?

Anonymous Coward says:

It is weird seeing CompTIA (a supposed industry certification organization) up there with the real bad hombres at CTIA and NetChoice. And then I remember how they tried to revoke all CompTIA “for-life” certifications and switch them to the three-year gravy train certs, before the community managed to persuade them to keep all pre-2011 certs “for-life” and just stick the new guys with the gravy train certs. I wish this company (which we all pay for, due to its 501(c)(3) status) would go out of business. CompTIA is seen, in the industry, as a basic cert and usually derided compared to SANS or CISSP (though CISSP is an ass organization too).

If CompTIA wants to fix this shit, require the companies that make IOT devices to be as open and as secure as possible… if you sell your stuff, you must either support it or provide the firmware/software source to your users so they can support it. Don’t lock it down.

Anonymous Hero says:

Circular Logic?

“Almost two-in-three American consumers said that the explosive growth of Internet-connected products makes them more concerned about their privacy and security”

So we should only trust the vendors whose crappy products convinced two-thirds of consumers to be concerned about their privacy and security?

Anonymous Coward says:

“don’t allow third-party or user self repair because you’ll wind up hacked”.

I don’t think this is quite correct. While they are posing it in that form, the real culprit is that it prevents people from finding out about the security holes they have. So instead of fixing the problem, it allows them to force their customers’ heads into the sand as well.

orbitalinsertion (profile) says:

I can determine this so rarely that I assume 100% of products, even ones I don’t know about, come from an OEM or software vendor whose only concern over my privacy is that they know everything possible about me and who have no regard for security.

The only downside I can imagine for third-party repair, other than a poor job or failure, is that they accidentally increase the security of a device by breaking something. If I send it to them with my data on it, I am an idiot, so that vector is ignored. Sure, some repair outfits or guys may do something evil, but I don’t see how the same sort could not be working for the oh-so-trustworthy original vendor, so I find this point moot also.

Holding an invisible watermelon does not make your argument valid, The Security Innovation Center. Also, your entire cake is clearly a lie.

MikeOh Shark says:

warranties

If companies want customers to use them as a repair source, all they have to do is lengthen their warranties. Everyone wants their product repaired if it’s not going to cost them.

We need to get a politician, if there are any that haven’t been bought off yet, to add riders to every bill prohibiting right to repair that lengthen warranties to 5 years parts and labor and 10 years software.

ECA (profile) says:

Don’t give yourself a headache

“75 percent value warranty protections over convenience and 70 percent feel most comfortable having their products fixed by a manufacturer or authorized repair shop.”

I have come to the conclusion that in the first warranty period I will TRY to destroy my products… without going OVER the basic restrictions of the product.
30 days to the local store
90 days to the seller/distributor…
1 year to the manufacturer…
Or whatever adds up to THEM PAYING TO GET IT FIXED OR A NEW PRODUCT…

Best company for this: HARBOR FREIGHT… they have a no-nonsense warranty, even on CHEAP PRODUCTS…

As convoluted as what they are saying is… PRIVACY HAS NO MEANING ON TOOLS. And I don’t need my DEVICES, EVER, sending random data to HOME BASE.
Gizmodo did this: a connected home, watching more than 6 devices all sending data to a router/modem for 1 year… it got real stupid. Even setting it up, most routers only handle 4-6 devices well (not said on the box)… and the AMOUNT of data even a small home can send with SMART devices is HUGE… and you will NOT like CAPS. DEVICES SENDING DATA OUT RANDOMLY, even the coffee maker (WHY)… Interlinking some devices (NOT EASY) to work together… Computer and REMOTE controls… AND THINKING that a SMART person set up the networking protocols in ANY OF THESE PRODUCTS, and nothing was DRM/PROPRIETARY… In your dreams.
