Tempes Fugit's Techdirt Profile

Tempes Fugit's Comments

  • Mar 29, 2018 @ 02:14am

    Everything Oracle touches dies

    You've already covered the legal issue thoroughly, and I've already long since weighed in on the folly of this entire case, so I'll just note that Oracle is now nothing more than a parasite -- like what remains of SCO -- leeching off people and companies that do real work.

  • Mar 28, 2018 @ 10:49am

    Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "Last I checked in Statistics class a sampling of one is useless."

    Two responses to that.

    First, if we accept that statement, then it is useless in support of the claim "driverless cars are more safe" and equally useless in support of the claim "driverless cars are less safe".

    Second, that's why I suggested approaches that (a) normalize and (b) use many more data points. If -- and I'm fabricating these numbers to illustrate -- driverless cars have been on the road for 4000 hours in Phoenix, then we have substantially more than one data point about them. Of course while that was happening, human-driven cars might have been on the road for 315,000 hours, so we still have the problem posed by the enormous disparity in the raw numbers. But at least we're past the problem of a singular data point.

    What we need to better understand this are the real numbers for both human-driven and driverless cars. I'm working on the former at the moment.

  • Mar 28, 2018 @ 10:07am

    Re: Re: Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "Again, I'd rather get some valid data rather than try to randomly generate figures that will by nature be both fictional and skewed toward whatever the person guessing wants to prove."

    Do note that using the theoretical maximum, as I suggested, stacks the deck against my point. I did that deliberately, to avoid skewing the numbers in favor of my argument.

    "I'm yet to hear a valid reason, apart from "I don't trust them""

    I've provided some in previous commentary here, and I've referenced others. I'm overdue to write a long-form piece laying out some of them -- and there are plenty. One of my principal concerns is that driverless vehicles aren't special -- they're just another thing in the IoT, and the entire IoT is an enormous, raging dumpster fire of security and privacy failures. There are ZERO reasons to think that cars will be any better than toasters, and a lot of reasons to think that they'll be worse.

    I'll publish it when I have the time so that the arguments are laid out more clearly for analysis/critique. If you want to see a draft version, drop me an email and I'll send you what I have so far.

  • Mar 28, 2018 @ 05:26am

    Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "That's a reasonable metric, but the problem is [...]"

    Agreed. It would probably be better to use statistics at the national level in order to better represent all driverless vehicles, but that still leaves the problem of the massive difference in scale between the two sets of statistics.

    "I'd rather see some real figures [...]"

    I'm working on getting those. I'm curious to see what they are as well. Of course, for a fair comparison, we'd also need figures on operator-hour and vehicle-miles for the driverless vehicles too. However, because there aren't many, we could deliberately overestimate those (e.g. 168 hours/week/vehicle, which is the theoretical maximum) and then see what those calculations tell us.
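
    To make that overestimation concrete, here's a minimal sketch; the fleet size and observation window are invented placeholders, and only the 168 hours/week ceiling comes from the paragraph above:

        # Back-of-envelope: deliberately overestimate driverless exposure
        # using the theoretical maximum of 168 operating hours per vehicle
        # per week. Fleet size and window are made up for illustration.
        MAX_HOURS_PER_WEEK = 24 * 7          # 168: no vehicle can exceed this

        fleet_size = 100                     # hypothetical driverless fleet
        weeks = 52                           # hypothetical observation window
        fatalities = 1                       # the single known incident

        # Overestimating hours *lowers* the computed driverless rate, so
        # the result is a floor, not an estimate.
        hours = fleet_size * MAX_HOURS_PER_WEEK * weeks
        print(f"{fatalities / hours:.2e} fatalities/operator-hour, at minimum")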

    "I also somehow doubt you'd have been so concerned about the death rate before this one happened, since that would have argued the opposite point for you."

    I've been arguing against driverless vehicles for a long time. I commented here on this specific point because the citation of the death rate is being used to suggest that driverless vehicles are safer. Personally, I think it would be better to compare all accidents (that is, fatal and non-fatal, pedestrian and non-pedestrian) in order to use larger data sets and perhaps gain better insight. But it should be clear to everyone that using raw numbers without normalization is just wrong.

  • Mar 28, 2018 @ 03:58am

    This is an apples-to-oranges comparison, and it's wrong

    "There were ten other pedestrian fatalities the same week as the Uber accident in the Phoenix area alone [...]"

    These raw numbers mean nothing. If you want to compare fatality rates, then use either "pedestrian fatalities per operator hours" or "pedestrian fatalities per vehicle miles".

    Using operator hours as a metric normalizes over the accumulated time an ensemble of vehicles was in use. Using vehicle miles normalizes over the accumulated distance that an ensemble of vehicles travels. Of course, the more hours that vehicles are in use and the more miles that they travel, the more opportunity they have to be involved in accidents, including but not limited to pedestrian fatalities. [1]

    To provide a hypothetical example, if 5000 vehicles were operated for exactly 1 hour each, that's 5000 operator-hours, and if there were 10 pedestrian fatalities associated with those 5000 vehicles, then that's a rate of .002 fatalities/operator-hour. (Similarly for vehicle miles.) And if -- during the same time period -- 1 self-driving vehicle was operated for 10 hours with 1 associated pedestrian fatality, then that's a rate of .1. Which is 50 times higher.
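
    For anyone who wants to check the arithmetic, here's the same toy example in a few lines of Python:

        # The hypothetical example above, verbatim.
        human_rate = 10 / (5000 * 1)      # 10 fatalities, 5000 operator-hours
        driverless_rate = 1 / (1 * 10)    # 1 fatality, 10 operator-hours

        print(human_rate)                    # 0.002
        print(driverless_rate)               # 0.1
        print(driverless_rate / human_rate)  # 50.0 -- fifty times higher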

    The real numbers are far more skewed than this example, of course; Phoenix has a population of roughly 1.5M. If only 10% of those people drive only 20 minutes (during the time period in question), that's 50,000 operator-hours. And as large as that is, it's still too low to be realistic: consider all the vehicles that are operated all day long (cabs, buses, trucks, delivery cars/vans/trucks, police cars, etc.) and consider the impact of twice-a-day commuting on the aggregate total. I wouldn't be surprised at all if the normalized pedestrian fatality rate per operator-hour for human-driven vehicles is a ten-thousandth of that for driverless vehicles, or much less. (And the same goes for vehicle-miles, although obviously the numbers would be calculated differently.)

    Feel free to use your own back-of-the-envelope estimates for these. AAA has published the figure of 17,600 minutes/year as an estimate for all drivers; that's 338 minutes/week or 5.6 hours/week -- a lot higher than the 20 minutes I used above. The Car Insurance Institute estimates about 13,500 miles/year per driver, or about 260 miles/week. Obviously these vary by state and city, but I'm sure the actuaries who do this for a living have solid estimates for Phoenix. However you do the calculation, you'll find that in the Phoenix area, the pedestrian fatality rate for driverless vehicles is several to many orders of magnitude higher than that for human-driven ones.
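
    As one possible worked version of that calculation -- the driver share is my own assumption; the rest comes from the figures above:

        # Phoenix-area back-of-envelope using the per-driver averages above.
        population = 1_500_000
        driver_share = 0.6                    # assumed fraction who drive
        drivers = population * driver_share

        hours_per_driver_week = 17_600 / 52 / 60       # AAA: ~5.6 hours/week
        human_hours = drivers * hours_per_driver_week  # ~5.1M hours/week

        weekly_pedestrian_fatalities = 10     # the figure quoted above
        print(weekly_pedestrian_fatalities / human_hours)  # ~2e-6 per hour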

    [1] Obviously the kind of accidents they're likely to be involved in varies with where the vehicles are. Pedestrian fatalities are more likely to happen on streets and less likely to happen on highways. On the other hand, high-speed collisions are more likely to happen on highways and less likely to happen in urban centers. However, calculating this based on an ensemble of vehicles which encompasses the entire area (downtown, city, suburbs, exurbs, etc.) and over a sufficient period of time (much more than a single day, in order to account for commuting/non-commuting days) smooths out the variations enough to yield useful metrics that are applicable to the entire region.

  • Mar 27, 2018 @ 08:52am

    "Security analysts like Bruce Schneier have been warning for a while that the check is about to come due for this mammoth dumpster fire, potentially resulting in human fatalities at scale -- especially if these flaws are allowed to impact integral infrastructure systems."

    Like driverless vehicles. The cheerleaders for these like to pretend that they're exempt from the dumpster fire, but in fact they may be the worst part of it.

    Last week's Uber incident was only the beginning.

  • Mar 22, 2018 @ 04:05pm

    Re: Re: I've been telling you this for over a decade

    "If you want your own controlled walled garden on the Internet, buy your own servers and select the users you allow to use it."

    If I wanted to do that, I wouldn't have spent several decades trying to do the opposite.

    "It is obvious that social media sites will be gamed, by the same people who run botnets and spam."

    No. It is obvious that people will TRY to do that (and much more, of course). There is no reason for social media sites to fall for it, not when techniques to defend against it are well-known, well-understood, and readily available. It's just not that hard, and it's an expected, baseline level of competence in the field.

    The only reason any operation becomes overrun by this nonsense is that it's chosen to. It's chosen to be cheap, or negligent, or incompetent, or it frankly just doesn't care.

  • Mar 22, 2018 @ 12:37pm

    Re: Re: I've been telling you this for over a decade

    That's a great question. And unfortunately, I think the answer is "none". In a better world, they'd want to change on their own, that is, not because they were called out but because they wanted to improve. But this is not that world.

  • Mar 22, 2018 @ 11:50am

    I've been telling you this for over a decade

    "To say it is like a bull in a china shop would be unfair to bulls, who at least seem to have some awareness of the chaos they leave in their wake as they throw their weight around. Whereas Facebook seems to have little insight into just what it is that it does, where it lives in the Internet ecosystem, and who is in there with it."

    Facebook is run by a sociopath and operated by ignorant newbies who haven't got the slightest idea how to professionally manage a large (or even a medium-size) operation connected to the Internet. Nobody there seems to have mastered Internet Operations 101. Nobody there seems to have learned anything from prior successes and failures -- ESPECIALLY failures. Nobody there seems to grasp that being on the Internet is a privilege, not a right, and that responsible exercise of that privilege requires due diligence. Nobody there, and this restates your point, seems to have the slightest idea of the enormous amount of damage they've done and are doing.

    Facebook is, top-to-bottom, a failure. It didn't have to be, but it chose to be from the very beginning and it's stubbornly refused to admit and fix its mistakes.

    (By the way: did you read Zuckerberg's formal statement? Did you notice what was missing? The words "sorry", "apologize", and "regret".)

    If this failure were solely confined to Facebook, then it wouldn't matter much. But as you observe, the consequences are going to reach far beyond that and are going to impact a lot of operations that DID do things right (or mostly right), operations that are far more important than Facebook, operations that were here before it existed and will still be here when it's gone. And that's the saddest part of this: a lot of good people who worked hard are going to have to pay for mistakes they never made.

  • Mar 19, 2018 @ 04:48pm

    Re: Re:

    "They can't be hacked remotely."

    1. or so the vendors claim
    2. that you know of
    3. today
    4. and they don't need to be

    There's a lot more to be said on this, but for the moment: a person is dead, and that's a tragedy. The debate can wait.

  • Mar 19, 2018 @ 11:47am

    Accountability and canaries

    "The most Facebook that did wrong was not monitoring the professor close enough to make sure he was abiding by the terms of the agreement."

    1. It's not clear -- at least not yet -- that they monitored him AT ALL, which is incredibly irresponsible. Referring back to what I said upthread about academic research (and once again, I'm not familiar with UK law), entities providing data to academic researchers are expected to keep an eye on what they're doing. That might mean asking for reports, or conducting audits, or other things, but the gist is that you can't just hand over the data and wash your hands of responsibility.

    2. What Facebook should have done is put canaries in place: synthetic profiles that are positioned so that they'll be picked up by this researcher and ONLY by this researcher. If that data turns up elsewhere, or if there are indications that the data is being used elsewhere, that proves there's a data path from Facebook through this particular researcher to someone else -- at which point some serious questions need to be asked.
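
    A minimal sketch of the mechanism -- every field name and structure here is hypothetical, just to illustrate the idea:

        import secrets

        def make_canary(recipient_id: str) -> dict:
            """Synthesize a profile that only one recipient will receive."""
            token = secrets.token_hex(8)
            return {
                "name": f"Canary {token}",
                # unique address: if it shows up anywhere else, it leaked
                "email": f"{recipient_id}.{token}@example.com",
                "recipient": recipient_id,
            }

        def find_leaks(canaries, leaked_emails):
            """Return the recipients whose canaries appear in leaked data."""
            return {c["recipient"] for c in canaries
                    if c["email"] in leaked_emails}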

  • Mar 19, 2018 @ 07:22am

    A couple of early observations

    1. "The Intercept had reported something similar a year ago, though it only said it was 30 million Facebook users, rather than 50 million."

    I think it's probably prudent to think of 50M as a floor, not a ceiling. These incidents are ALWAYS worse. See: Equifax. See: Yahoo. See: pretty much any dataloss/breach/security incident over the past couple of decades.

    2. "Cambridge Analytica got its data by having a Cambridge academic (who the new Guardian story revealed for the first time is also appointed to a position at St. Petersburg University) set up an app that was used to collect much of this data, and misled Facebook by telling them it was purely for academic purposes[...]"

    This is blatant academic misconduct. In the US (I can't speak to UK law), all research involving human subjects - which includes their data - has to go through a vetting process which includes details on what data will be involved, what will be done with it, what the research objectives are, how the data will be protected, how the data will be retained/destroyed, etc. That process includes an IRB (Institutional Review Board), made up of both insiders (people at the same institution) and outsiders (people who aren't), which has veto authority. If you tell the IRB "we're going to study there/they're/their conflation among millennials" and you instead use the data to study their choices in smartphones, it's not going to go well for you when the IRB finds out. And this present case is clearly much, much worse.

    3. "Facebook doesn't sell your data. It sells access to its users via the data it has on you."

    And it has gone to enormous lengths to acquire, store, and analyze that data. That's why its market valuation is upwards of $200B. Facebook acquires every scrap of data that it can about everyone and everything, and subjects it to excruciating analysis: that's its entire reason for existing; the "social" features are just wallpaper over the important machinery.

    Given that singular focus, I find it VERY hard to believe that anyone or anything accessed data on 50M people and wasn't noticed. That should have left a trail a mile wide in the logs, easily noticed with even perfunctory analysis.
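
    Even crude monitoring would catch it -- something on the order of this sketch, where the log format and threshold are my own assumptions:

        from collections import defaultdict

        ALERT_THRESHOLD = 1_000_000   # distinct users before someone looks

        def scan(access_log):
            """access_log: iterable of (client_id, user_id) access events."""
            reach = defaultdict(set)
            for client_id, user_id in access_log:
                reach[client_id].add(user_id)
                if len(reach[client_id]) == ALERT_THRESHOLD:
                    print(f"ALERT: {client_id} has reached "
                          f"{ALERT_THRESHOLD:,} distinct users")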

    So either they weren't monitoring what their own operation was doing -- which would be a stunning level of incompetence and negligence -- or they knew about this all along.

  • Mar 16, 2018 @ 04:11am

    Re: Re: New Bread Box Need, New Engineers Requested

    Yes, it would be lovely to start over and apply all the lessons we've learned. However, precisely zero of the people proposing that course of action have been able to put forth a workable plan for migrating the entire Internet.

    Email has its problems, to be sure. I've spent decades documenting and working on them, so I think it's fair to say I have an extensive awareness of them. But for all that, it's still the "killer app", and the communications method of choice for clueful people.

  • Mar 16, 2018 @ 04:14am

    They're not building a communications tool

    They're building a target. It will be fully compromised before it goes live.

  • Mar 13, 2018 @ 02:10pm

    No, not really

    "The issue is that companies are inevitably going to be bad at this."

    They'll only be bad at it if they really want to be -- that is, if they refuse to learn anything from all the successes and failures (especially the latter) that preceded them.

    This isn't new. It's just another version of a problem that's repeatedly surfaced over decades, which is why there are now a lot of well-known approaches to dealing with it. Of course every version of this problem has its own unique characteristics, and thus not every approach will work -- but some of them will. All the people at Twitter have to do is pay attention to history.

    I hope they are. But gaffes like this strongly suggest to me that they're not.

  • Mar 05, 2018 @ 01:54pm

    Re: Re: And yet...

    This analysis is fine, as far as it goes. The problem is that it doesn't go far enough. Let me comment on a couple of points:

    "That's not an externality. That's not some third party Carol being harmed without car buyer Bob's knowledge. That's Bob being harmed. The buyer."

    And everyone else in the car with Bob. And everyone else that Bob's car hits. And everyone in all the other models of this car with all the other Bobs and everyone in all the other cars those hit. And all the pedestrians and everyone else.

    If you're going to tell me that this kind of class breach is impossible in driverless vehicles even though we've seen it in myriad other IoT devices (and servers) (and routers) (and laptops) (and CPUs) (and smartphones) (and SCADA) (and pretty much every other computing device), then that's an extraordinary claim. Where is the extraordinary proof?

    "But your suggestion that the current security problems in IoT devices presage comparable security problems in driverless cars is a false comparison."

    It's actually a very easy and obvious comparison, because the people working on these vehicles are very intent on repeating the failures of the rest of the IoT ecosystem. They're working hard on it. They're spending money and time on it. And you know, we can already see some of the signs that they're succeeding:

    https://arstechnica.com/cars/2018/02/no-one-has-a-clue-whats-happening-with-their-connected-cars-data/

    Think about the implications of that. Put the privacy issues aside for a moment and think about what it tells you about the design decisions being made. And think about what it does to the overall system security posture -- which is more than just the vehicles.

    (non-sequitur) I should probably write this up at length, because trying to articulate a complex argument in little snippets really doesn't work that well. But in the interim let me refer you to this excellent piece by Zach Aysan:

    https://www.zachaysan.com/writing/2018-01-17-self-crashing-cars

    He has a slightly different take on it than I do, but I think (not speaking for him) we're roughly on the same page.

    I also have a suggestion -- AFTER you read that piece.

    Sit down and make a back-of-the-envelope estimate of the available attacker budget (a la http://www.schneier.com/crypto-gram-0404.html#4). Then keep in mind the massive asymmetry between attackers and defenders -- that is, we routinely see attackers with only a tiny fraction of defenders' budgets succeed, but succeed massively. (See: 9/11/2001.) So based on the available attacker budget, pick a multiplicative factor that suits you -- 100X, 500X, 617X, whatever -- and calculate the defender budget necessary to have a reasonable chance of thwarting attacks.
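
    In code form, with pure placeholder inputs -- plug in your own estimates:

        attacker_budget = 10_000_000   # assumed available attacker budget, USD
        factor = 500                   # your chosen asymmetry multiplier

        defender_budget = attacker_budget * factor
        print(f"${defender_budget:,}")  # $5,000,000,000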

    It will be a big number. (If it's not, you did it wrong.)

    Nobody is spending that on vehicle security.

    You know what they're spending money on? They're spending it on things that make the vehicles LESS secure. Scroll back up and read the Ars Technica piece again.

    I'll try to write something longer that lays out the argument better. But in the meantime, feel free to stick a pin in this, come back in ten years and tell me I was wrong. I'd be happy to be.

  • Mar 05, 2018 @ 10:35am

    Re: Re: And yet...

    "Please defend this statement."

    Have you not been paying attention?

    There are insecure TVs.
    There are insecure fitness watches.
    There are insecure car washes.
    There are insecure "smart" locks.
    There are insecure pacemakers.
    There are insecure sex toys.
    There are insecure speed cameras.
    There are insecure vacuum cleaners.
    There are insecure toys.
    There are insecure safes.
    There are insecure toasters.

    And yet, somehow, amazingly, incredibly, the same group of people who are responsible for all of these are going to avoid making the same set of mistakes with vehicles -- because gosh, driverless vehicles are magically different, and so what has failed everywhere else with entirely predictable and depressing monotony is going to succeed here.

    To borrow a line from Theo de Raadt, anyone who thinks that will happen is deluded, if not stupid.

  • Mar 05, 2018 @ 07:42am

    And yet...

    ...there are ignorant newbies who think driverless vehicles are a good idea -- conveniently ignoring the fact that they're just another thing in the IoT. They're not magically exempt from this dumpster fire.

  • Mar 02, 2018 @ 11:52pm

    You've got to be kidding

    Driverless vehicles are just another thing in the IoT. And just like the rest of the IoT, they are an absolute horror story of security and privacy problems. It's not a question of if they'll be hacked en masse with horrific consequences, it's only a question of when.

  • Feb 27, 2018 @ 03:54am

    Re: Re: Re: Re: You're absolutely right: they weren't

    "What you keep railing about is that those companies do not control who can use their platforms."

    Why don't they?

    That's what responsible professionals do.
