Tempe Police Chief Indicates The Uber Self-Driving Car Probably Isn't At Fault In Pedestrian Death

from the human-error dept

The internet ink has barely dried on Karl’s post about an Uber self-driving vehicle striking and killing a pedestrian in Arizona, and we already have an indication from the authorities that the vehicle probably isn’t to blame for the fatality. Because public relations waits for nobody, Uber suspended its autonomous vehicles in the wake of the death of a woman in Tempe, but that didn’t keep fairly breathless headlines from being painted all across the mainstream media. The stories that accompanied those headlines were more careful to mention that an investigation is required before anyone knows what actually happened, but the buzz created by the headlines wasn’t so nuanced. I actually saw this in my own office, where several people could be heard mentioning that autonomous vehicles were now done.

But that was always silly. It’s an awkward thing to say, but the fact that it took this long for AVs to strike and kill a pedestrian is a triumph of technology, given just how many people we humans kill with our cars. Hell, the Phoenix area itself had 11 pedestrian deaths by car in the last week, with only one of them being this Uber car incident. And now all of that hand-wringing is set to really look silly, as the Tempe police chief is indicating that no driver, human or AI, would likely have been able to prevent this death.

The chief of the Tempe Police has told the San Francisco Chronicle that Uber is likely not responsible for the Sunday evening crash that killed 49-year-old pedestrian Elaine Herzberg.

“I suspect preliminarily it appears that the Uber would likely not be at fault in this accident,” said Chief Sylvia Moir.

Herzberg was “pushing a bicycle laden with plastic shopping bags,” according to the Chronicle’s Carolyn Said, when she “abruptly walked from a center median into a lane of traffic.”

After viewing video captured by the Uber vehicle, Moir concluded that “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway.”

So, once again, this tragedy has almost nothing to do with automobile AI and everything to do with human beings being faulty, complicated creatures that make mistakes. We don’t need to assign blame or fault to a woman who died to admit to ourselves that not only did the self-driving car do nothing wrong in this instance, but also that it might just be true to say that the car’s AI had a far better chance of avoiding a fatality than the average human driver. The car was not speeding. It did not swerve. It did not adjust its speed prior to the collision.

This obviously isn’t the conclusion of the police’s investigation, but when the police chief is already making these sorts of noises early on, it’s reasonable to conclude that the visual evidence of what happened is pretty clear. Sadly, all this likely means is that the major media websites of the world will have to bench their misleading headlines until the next death that may or may not be the fault of a self-driving vehicle.

Companies: uber


Comments on “Tempe Police Chief Indicates The Uber Self-Driving Car Probably Isn't At Fault In Pedestrian Death”

104 Comments
20 is plenty says:

Re: Re: 20 miles per hour

The vehicle was travelling 38 mph – 3 over the speed limit.
Had the vehicle been traveling 20 mph, the chances of fatal impact would have been dramatically reduced.
https://usa.streetsblog.org/2016/05/31/3-graphs-that-explain-why-20-mph-should-be-the-limit-on-city-streets/
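
(As a rough illustration of how sharply fatality risk climbs with impact speed, here is a small sketch that interpolates between approximate risk-of-death figures commonly cited from the AAA Foundation's 2011 impact-speed study; both the figures and the straight-line interpolation are assumptions for illustration, not numbers taken from the linked post.)

# Illustrative sketch only: approximate risk-of-death figures often cited from
# the AAA Foundation's 2011 impact-speed study (assumed here, not taken from the
# linked post), with straight-line interpolation between the published points.
RISK_POINTS = [(23, 0.10), (32, 0.25), (42, 0.50), (50, 0.75), (58, 0.90)]

def fatality_risk(impact_mph):
    """Rough pedestrian risk of death for a given impact speed in mph."""
    points = sorted(RISK_POINTS)
    if impact_mph <= points[0][0]:
        return points[0][1]      # below the lowest published point, treat 10% as an upper bound
    if impact_mph >= points[-1][0]:
        return points[-1][1]
    for (s0, r0), (s1, r1) in zip(points, points[1:]):
        if s0 <= impact_mph <= s1:
            return r0 + (r1 - r0) * (impact_mph - s0) / (s1 - s0)

for mph in (20, 38):
    print(f"Impact at {mph} mph: roughly {fatality_risk(mph):.0%} risk of death")

Even with these placeholder figures, the estimated risk at 38 mph works out to roughly four times the risk at 20 mph.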

The blame lies with the street engineers and politicians who designed a pedestrian-hostile roadway to begin with.

Richard (profile) says:

Re: Re: Re: Re:

In that area, in that week alone, 10 other pedestrians were killed by human drivers.

That suggests that the roads in that area are badly designed.

However, one thing bugs me. Even if the self-driving car was completely blameless, it was only there as part of an experiment.

If the experiment had not been taking place the woman would still be alive.

I don’t think this experiment would get through ethical review in my institution.

PaulT (profile) says:

Re: Re: Re:3 Re:

Exactly. The questions here are very simple:

– Did the AI do something to cause the accident?
– Could a human driver have done something to avoid the collision?

If the answer to both of these is no, the controversy is fairly pointless. If the same result would have happened if you recreated the incident with a human being behind the wheel, then it’s just an unfortunate accident. Those, sadly, happen. Nobody’s talking about banning tech because of the other 10 dead people that week, and it’s likely some of those were actually avoidable.

PaulT (profile) says:

Re: Re: Re:2 Re:

“That suggests that the roads in that area are badly designed.”

Yes. That would remain true whether or not an AI is driving, though. The point is, while everybody’s talking about this as a big scandal, it’s apparently not even 10% of the traffic deaths in that city over a single week. It’s only made it past the local news because of the identity of the driver.

I’d almost like to see the stats on those other deaths – which are not being talked about because they were such “ordinary” deaths. Are those situations where this type of AI would have saved lives? If they were the result of people texting, drink driving, etc., then the answer is possibly yes. If the result of people driving like idiots, then yes. You have to take these as an overall view, not fixate on an outlier and demand changes based on those.

“If the experiment had not been taking place the woman would still be alive.”

The first pedestrian ever killed by a car would have been too if that experiment hadn’t taken place. Well, not now maybe, but you get my point. It might sound a little mercenary, but the idea of this kind of tech is to help prevent deaths, not magically reduce them to zero (although I’m sure everyone would prefer that if it were possible).

“I don’t think this experiment would get through ethical review in my institution.”

So, when do you believe that public testing of any technology should take place? If you’re waiting for perfection, that will never happen, and even if you could actually perfect it in the lab people will still get hurt the first moment the resulting system fails to react correctly to something you hadn’t anticipated in your controlled tests. That’s the point of this kind of public testing – shit happens in real life that engineers don’t anticipate and they need the data to improve their systems to avoid future incidents when the end product is released.

That’s the problem here – the first fatal accident, one that barely registers as a blip in overall traffic incidents, is getting people to forget what reality looks like. If you want technology to not be endangering people, you have to test it in the real world. That might sound heartless, even cruel, but if this stuff helps prevent thousands of deaths in the future as it has the potential to, then these tests have to happen.

Anonymous Coward says:

You hear that, guys? The automated vehicle wasn’t at fault! We can continue praising automated vehicles to the heavens while downplaying the fact that the transition away from human drivers to fully-autonomous vehicles will be a huge giveaway of power to corporations and governments. Full speed ahead to a Utopia fully owned, operated, and monitored 24/7 by Tesla, GM, Uber, and the NSA!

Wendy Cockcroft (user link) says:

Re: Re: Re:

Eh, PaulT, it’s worth considering that, at least in theory, it’d be possible to hack into the car and control it remotely.

Even air-gapping doesn’t always work, per the revelations over state-sponsored malware attacks, e.g. Stuxnet. Personally, I’d rather be able to maintain full control over my vehicle at all times.

PaulT (profile) says:

Re: Re: Re: Re:

“it’s worth considering that, at least in theory, it’d be possible to hack into the car and control it remotely”

Yes, and that needs to be taken fully into consideration when this passes the prototype stage. But, in terms of a state-wide conspiracy to cause traffic accidents, I’d say that’s less likely than the methods they already have now. I’m not saying it’s impossible, just that your chances of being affected negatively are going to be far less than your chances of being harmed in a traffic accident caused by a bad human driver right now.

Richard (profile) says:

Re: Re: Re:

Yes, you’ll hand power over to the… mostly the same government and corporations that control the current system, and are largely responsible for you needing a car in the first place.

NO

you’ll hand EVEN MORE DETAILED AND FINE GRAINED power over to the… mostly the same government and corporations that control the current system, and are largely responsible for you needing a car in the first place.

That Anonymous Coward (profile) says:

Normally there is just a black box with a few details from readings that need to be put back together.
Since AI has to always watch (and we live in a hyper-litigious world), I am not shocked there is video available (not to mention lidar or other sensor outputs).

This does sort of show that the problem in most of the equations of self driving cars is humans.
The Google cars had a bunch of accidents, all caused by humans.
The Tesla cars have had a few accidents; some of the worst ones seem to come down to human hubris.

The rules say that humans should only cross the street at marked crossings, but crossing in the middle of the street in the dark is something humans do. They expect that cars will stop for them, because humans are self-centered & self-focused.

Anonymous Coward says:

You believe police when exculpates Uber. -- WAIT FOR THE TRIAL.

First, you haven’t seen the video; you’re just relying on a likely slanted characterization by a cop who wants this experimentation to proceed. We’ll need to dig into any payoffs made by Uber to council members or the city to allow this menace on the roads.

Anonymous Coward says:

Re: You believe police when exculpates Uber. -- WAIT FOR THE TRIAL.

“On the median” doesn’t fit with “stepped from the shadows”, nor with the superior low-light ability of a video camera, nor with the right front of the SUV being dented.

This ain’t over until lawyers have argued to jury that the heartless giant corporation loosed this faulty robot to attack unsuspecting humans.

Anonymous Coward says:

Re: Re: You believe police when exculpates Uber. -- WAIT FOR THE TRIAL.

Also, my take that a human would expect a person on the median to be unpredictable and would slow down or turn to avoid her is still accurate (the minion makes a point of stating that the SUV didn’t slow or swerve…): if she’s in the video for more than half a second, then the car with its "superior" reaction time is faulty.

Anonymous Coward says:

Re: Re: Re:2 Replying to yourself twice makes you look crazy

None of those comments should be flagged. Someone who can’t possibly imagine that a postscript to a comment should be allowed, and who flags a comment multiple times to get it “FLAGGED BY COMMUNITY”, should be STOPPED. IDIOTS

Jeffrey Nonken (profile) says:

Re: Re: Re: You believe police when exculpates Uber. -- WAIT FOR THE TRIAL.

So… what you’re saying is that, because you haven’t seen the video and weren’t there to witness the accident, and aren’t familiar with the road in question nor the conditions present at the time, then the car must ipso facto be at fault. The cop has seen the video, so he must be lying because it disagrees with your narrative.

Anonymous Coward says:

Re: Re: Re:2 You believe police when exculpates Uber. -- WAIT FOR THE TRIAL.

To try to get you to notice this UPDATE: “But two experts who viewed the video told…”

“… The Associated Press that the SUV’s laser and radar sensors should have spotted Herzberg and her bicycle in time to brake.”

Well, there ya go, kids. TWO experts compared to one sleazy donut-addicted corrupt cop. Usually, that’d clinch it for Techdirt, but not when it’s Uber!

Oh, and incidentally: MY TAKE IS PROVEN RIGHT. Cameras or processing likely FAILED.

Now, do I know when to believe cops or NOT, compared to you, Jeffy?

Christenson says:

Re: Thin blue line..where's my tinfoil hat??!!!

Cop (same guys that beat and murder citizens and harass the homeless, like the dead woman) says “unavoidable by the vehicle”.

As others note, the cop has a lot of potential motivation to say that.

I would like to see an independent assessment, or, better, the video itself. Transparency, please!

And **** yes, unprotected pedestrians near 35mph traffic lanes scare me, enough to make me do things like pick them up and give them rides.

Christenson says:

Re: Re: Re: Thin blue line..where's my tinfoil hat??!!!

Note that the motivations I listed are potential. There are fair and honest cops, and there are crooks, and all shades in between.
1) Uber is waving a lot of money around, so the city fathers would like to protect their little golden goose.
2) The woman killed was homeless and penniless… the sort of person cops would prefer not to have to deal with.
3) That woman might also not have the right ethnic background as far as the cop is concerned. She’s one of THOSE people, for your favorite definition of “US”. Not “OMG, that could have been my sister!”

Christenson says:

Re: Re: Re:3 Thin blue line..where's my tinfoil hat??!!!

Remember, all these things are potential…as in, evidence needed! (And dark skin isn’t the only kind of “wrong” ethnicity, especially not in Arizona)
4) Others note this woman was a druggie of some kind… not the RIGHT kind of person. (though that does tend to indicate she may have been impaired and hard to avoid)

PaulT (profile) says:

Re: Re: Re:4 Thin blue line..where's my tinfoil hat??!!!

“Remember, all these things are potential…as in…”

…figments of your imagination! It doesn’t matter if she could potentially have been a bunch of hamsters tied together, the evidence we know already disproves that.

“evidence needed!”

We have evidence. But you’ve chosen to ignore that in favour of some race-baiting and chasing around some half-assed conspiracy theory instead of the sad accident it obviously was.

“(though that does tend to indicate she may have been impaired and hard to avoid)”

It also indicates that she could have been suicidal and got herself killed deliberately. Which is a far more likely scenario than your cooked-up conspiracy to rid the streets of undesirables, by having a private company destroy its potential for massive future profits by acting as an automated hit squad.

Christenson says:

Re: Re: Re:5 Thin blue line..where's my tinfoil hat??!!!

On my tin-foil hat:

My original point was that the head cop could be biased towards not blaming Uber for this tragic accident, and I listed some obvious ways that might happen, some obvious possible sources of that bias, based on generalizations about the worst cops we keep hearing about. This is why I want someone independent to look at that footage!

I’m quite sure (even with, e.g., an ex-con for a backup driver) that it was, in the end, an accident on the part of Uber, with no intentionality, and that the cops aren’t encouraging this kind of thing.

Back at the programming farm, I’m sure the programmers are setting up the vehicles to recognize an unrecognized object moving at the side of the road as a hazard to slow down for and possibly change lanes away from, just like a cautious human driver.

Gerald Robinson (profile) says:

Re: Re: Re:6 Thin blue line..where's my tinfoil hat??!!!

Having been extensively involved with computers since 1964, I can say that unless the specifications are published and the programs open source, there is no reason to believe this. The corporations will cheat and take shortcuts, then lie about it.

Under at least one interpretation of the DMCA’s anti-circumvention provisions, any examination of the specs or code is illegal.

PaulT (profile) says:

Re: Re: Re:6 Thin blue line..where's my tinfoil hat??!!!

“My original point was that the head cop could be biased”

There’s a lot of things he could be. There’s no evidence that he is, and half the examples you came up with were immediately disproven by the evidence we already have available.

“This is why I want someone independent to look at that footage!”

Do you demand this of every crash, or just this one? How do you know they aren’t already hiring hit squads to run people over, like the other 10 people who died in that area that weekend?

“Back at the programming farm…”

They will constantly be taking in results of every test and making changes/improvements based on the data, since this is a damn beta test and the entire point. In the process, making them even less likely to crash compared to human drivers than they already are.

Gerald Robinson (profile) says:

Re: Re: Re:7 Thin blue line..where's my tinfoil hat??!!!

‘”Back at the programming farm…”

They will constantly be taking in results of every test and making changes/improvements based on the data, since this is a damn beta test and the entire point. In the process, making them even less likely to crash compared to human drivers than they already are.’

Fat chance! There is little incentive for the vendor (so far most car companies just buy from someone else) to make any changes which are inconvenient or cost money. There is no way to find out what is done, and it is illegal to try!

PaulT (profile) says:

Re: Re: Re:8 Thin blue line..where's my tinfoil hat??!!!

“There is little incentive for the vendor (so far most car companies just buy from some one else) to make any changes which are inconvenient or cost money.”

Well, good news in that case! Having not only a brand but an entire technology tainted with a reputation for randomly killing pedestrians is going to cost an immense amount of money, thus the problem will be dealt with quickly. Nothing will be less convenient or costly to Uber than having this project not end with production models that can be sold, and those will be dependent on government licensing, which will not be granted with a bunch of examples like this one on the books. They absolutely have to fix this kind of problem.

“There is no way to find out what is done and it is illegal to try!’

Same as every other non-FOSS tech company, then. So? Do you demand the code on the plane’s control systems before getting on one?

Gerald Robinson (profile) says:

Re: Re: Re:9 Thin blue line..where's my tinfoil hat??!!!

The flight code is not protected from 3rd party audit by the DMCA. YES IT MAY HURT THE CAR COMPANY BUT THEY CAN’T DO ANYTHING ABOUT IT EXCEPT FIND ANOTHER VENDOR. Looks like there will only be 2/4 vendors. No way to even find out if the vendor is at fault.

Current cars have a built-in problem: the HUI is common to the navigation/entertainment center. Yes, the car still has the usual controls, but increasingly safety-related operations depend on the HUI. As the control/data bus is common, what happens when someone plays a video that takes 90% of the bus?

AI cars have a major weakness, shown in the video: the driver will not be paying any attention after a few minutes, and so is likely to do the wrong thing if he overrides the AI at all. The driver is just another point of failure, not a safety backup!

PaulT (profile) says:

Re: Re: Re:10 Thin blue line..where's my tinfoil hat??!!!

“The flight code is not protected from 3rd party audit by the DMCA.”

As is all non-open sourced software. Again, do you demand complete transparency from every other car, plane, train, etc. company, or just this one? What is your actual point, other than wishful thinking about something that doesn’t exist anywhere else in the transport industry?

Writing in capital letters doesn’t make you somehow correct, you have to say something worthwhile with the words as well.

“AI cars have a major weakness, shown in the video”

As do cars driven by humans. The question is, are the AI cars worse, or better? If better, and significantly so (which data so far is suggesting) then what’s the issue? Demanding absolute perfection is stupid when it doesn’t exist in any of the alternatives. The same flaw you’re talking about exists in all cars already.

Anonymous Coward says:

>if she’s in video for more than half a second, then the car with its “superior” reaction time is faulty.

No, this is the same situation as automobile-train collisions, which are ALWAYS considered the automobile’s fault. Because both of them can see each other, and whatever the reaction time is, inertia lets the car stop faster. The lighter pedestrian can always get out of the way of the car. (and, for that matter, the mosquito–unless extremely distracted–can get out of the way of the human hand).

The conclusion here is obvious. Require AI controllers on all bicycles. Stop bicycle traffic worldwide until this is implemented. Disagree with me, and you’re nothing but a mass murderer.

Jeffrey Nonken (profile) says:

Re: Re:

“Disagree with me, and you’re nothing but a mass murderer.”

Hey, I’ve got over 3000 hours on Payday 2, where you murder cops by the score, if not by the hundred, and killing civilians only costs a nominal cleaning fee. It’s a video game, so of COURSE I’m a mass murderer. I just haven’t bought my fully semi-automatic [sic] military murder rifle yet.

And then there’s the Borderlands franchise. And Overwatch. Various flavors of Team Fortress. Half Life and friends. At least in Killing Floor you’re only killing zombies, but still. Over ten thousand hours killing people by clicking a mouse! I’m hopeless.

Definitely a mass murderer.

Anonymous Coward says:

"pushing a bicycle laden with plastic shopping bags,"

I want to know more about this woman’s life. Here’s what I know so far.

Has a bicycle but wasn’t riding it
Bike covered with plastic bags (contents unknown)
Meandering around in the wee morning hours with said Kroger-sponsored BMX
Crosses the road without looking both ways
Had already crossed this road once because she was walking her pet CVS bike down the median.

I wish not to mourn her death but to celebrate her life. For her life seemed truly unique.

Anonymous Coward says:

Re: "pushing a bicycle laden with plastic shopping bags,"

The internet makes it easy to find out more about her life.

She was an unwed mother at 15. Her recent life involved a series of drug arrests, some for ‘dangerous drugs’. That’s a charge usually reserved for heavy meth or bath salt use. She was likely living in a nearby homeless encampment, which is literally “down by the river”. This dark stretch of highway was known for drug use and sales.

Perhaps a bit more than you really wanted to know…

Anonymous Coward says:

You have no proof besides the cop’s statement, which in this country should always be taken with a grain of salt (until the video is released). Don’t accuse others of hand-wringing without proof and then turn around and suck up to Uber while being in the exact same position as those you are criticizing.

Anonymous Coward says:

Test Case

In a related note, Charles Stross blogged about a scenario of the near future:

The driver is monitoring their vehicle remotely from their phone, using a dash cam and an app provided by the vehicle manufacturer but subject to an EULA that disclaims responsibility and commits the driver to binding arbitration administered by a private tribunal based in Pyongyang acting in accordance with the legal code of the Republic of South Sudan.

Immediately before the accident the dash cam view was obscured by a pop-up message from the taxi despatch app that the driver uses, notifying them of the passenger pickup request. The despatch app is written and supported by a Belgian company and is subject to an EULA that disclaims responsibility and doesn’t impose private arbitration but requires any claims to be heard in a Belgian court.

The accident took place in Berwick-upon-Tweed, England; the Taxi despatch firm is based in Edinburgh, Scotland.

Discuss!

Christenson says:

Re: Test Case

I’ll bite, but let’s blur the phone into a VR/AR headset so the “driver” really could be in direct, short-term control of the vehicle and not just an overly controlling dispatcher.
1) Is iOS really good enough for non-redundant life-critical apps??? What about my cellular network that can take a few hours to deliver my text messages, and doesn’t work in some of my favorite places, which are easily reached by car?

2) The smartphone monitoring significantly attenuates the driver’s situational awareness… Bam! The car is hit from the side by… a deer? A car? Our bag lady from Phoenix? A rock from an angry pedestrian? Now what? Did the end user even notice?

3) A reasonable operating system allows the computer’s owner (and in fact gives them the responsibility) to make the dash cam app UI uninterruptible.

The EULA parts of this are clearly out of control, with a bunch of parties whose software failures could be involved:
Random user apps on the phone
The dispatch app
The VR driving app
The OS vendor
The phone network provider

The crux is the diffusion of responsibility — and, since “the accident” is supposed to be covered by insurance, I would expect that the insurance companies would eventually demand a single venue before insuring such vehicles. If mismanaged, this could easily lead to a closed ecosystem with very high entry barriers and buggy software. Not that total openness doesn’t have its own problems — if it’s widely inspected, it can also be copied easily, so the code itself cannot be used to maintain any kind of monopoly — which is an awful lot of the innovator’s motivation.

Anonymous Coward says:

It's also a wrong thing to say

“It’s an awkward thing to say, but the fact that it took this long for AVs to strike and kill a pedestrian is a triumph of technology, given just how many people we humans kill with our cars. Hell, the Phoenix area itself had 11 pedestrian deaths by car in the last week, with only one of them being this Uber car incident. “

This is, of course, not even close to a correct comparison. If you want to assess accident rates, then use operator-hours or operator-miles. If you do either, you’ll find that the accident rate/fatality rate for human-operated vehicles is vastly lower than that for automated ones.

As an aside, operator-hours are often a better metric in urban areas and operator-miles a better one in rural areas or when considering only highway driving. Why? Because traffic congestion in urban areas means that drivers spend more hours driving fewer miles, and thus using operator-miles skews the outcome. But in rural areas or in highway-only studies, this effect is far less and so operator-hours and operator-miles tend to correlate more closely.
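
(A minimal sketch of the exposure-normalized comparison described above, in Python; the fleet totals are made-up placeholders purely to show the arithmetic, not real figures for either human-driven or automated vehicles.)

from dataclasses import dataclass

@dataclass
class FleetStats:
    name: str
    fatalities: int
    operator_hours: float   # total hours of vehicle operation across the fleet
    operator_miles: float   # total miles driven across the fleet

    def deaths_per_million_hours(self):
        return self.fatalities / self.operator_hours * 1e6

    def deaths_per_100m_miles(self):
        return self.fatalities / self.operator_miles * 1e8

# Placeholder totals only -- NOT real data for either fleet.
fleets = [
    FleetStats("human-driven", fatalities=40_000, operator_hours=80e9, operator_miles=3.2e12),
    FleetStats("automated test fleet", fatalities=1, operator_hours=4e5, operator_miles=1e7),
]

for fleet in fleets:
    print(f"{fleet.name}: {fleet.deaths_per_million_hours():.2f} deaths per million operator-hours, "
          f"{fleet.deaths_per_100m_miles():.2f} per 100 million operator-miles")

Either denominator can be computed from the same totals; the point is only that raw death counts per week are not an exposure-adjusted rate.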

Will B. says:

Re: It's also a wrong thing to say

“If you do either, you’ll find that the accident rate/fatality rate for human-operated vehicles is vastly lower than that for automated ones.”

Of course it does, because comparing operator-hours or operator-miles of hundreds of thousands of person-driven cars versus a handful of automated cars VASTLY skews the numbers. If this one accident had not happened, then AI-driven cars would have LITERALLY been infinitely better – as in, X number of operator hours divided by ZERO deaths.

You don’t have the necessary scale to judge things this way, and attempting to do so skews the numbers far, far worse than just comparing deaths per week as the article does. Get back to me when the number of operator-hours for AI driven cars are even REMOTELY comparable to human-driven.

Anonymous Coward says:

Re: Maybe

This situation is not looking very interesting legally.

However, there was no sign of the car slowing before the accident. While the pedestrian was basically doing everything to make it difficult for the car not to kill her, the lack of any response from the car is concerning from a technical standpoint, and it may be cause for recalibrating the sensor AI / adding alarms when a sensor is impaired, etc.

Btw, 100% autonomous is still a wet dream, and you would need detailed servicing à la airplanes to reduce hardware failures to a minimum. That is a requirement for level 4 autonomous driving.

Most current projects operate at level 2 and 3, which require human intervention under certain conditions. That is not enough yet to have the cars operate outside their kindergarten and/or without adult supervision. Autonomous cars can to some degree replace buses at the moment, but they can’t replace taxis.

If only we could have car crash investigations be as open as air crash investigations…

Anonymous Coward says:

Re: Re: Re: Maybe

Not necessarily, when the person is hidden by bushes until almost the moment they step out in front of the car. Just how do you ‘see’ a person under those conditions?

Doesn’t "pushing" the bicycle imply she’s behind it, and it would have been visible first? (With Lidar, even in the dark.)

Human reaction time is ~100 ms, and we expect better from machines. If she was visible for more than that, the car should have started to slow, not necessarily in time to save her life.

Anonymous Coward says:

Re: Re: Re:2 Maybe

People usually push a bicycle by holding the handlebars, and depending on angles, the front wheel and any bags draped over it may become visible slightly sooner. Also, someone can go from invisible due to an obstruction to in front of the car by moving a couple of feet, and be hit before the mechanics of backing off the gas and applying the brakes can have any effect on the vehicle, even when computer controlled.

Also, a partial view of a moving bike wheel is not that easy for software to distinguish from noise or a bird etc.

PaulT (profile) says:

Re: Re: Maybe

“However, there was no sign of the car slowing before the accident.”

That doesn’t mean that no attempt was made, only that the car wasn’t able to noticeably reduce its speed before the impact. There may not have been time to react in terms of momentum, but that doesn’t necessarily mean that no reaction took place.

We’ll know more when the logs are examined and whatever results are released, but the laws of physics are still in operation no matter what’s driving the car.

Anonymous Coward says:

Re: Re: Re: Maybe

but the laws of physics are still in operation no matter what’s driving the car.

Friction applies all the time. If the car "released the gas pedal" it would have started to slow, and it doesn’t take an amazing accelerometer to detect that. The logs would show the action and the resulting (however small) slowdown.

PaulT (profile) says:

Re: Re: Re:2 Maybe

“If the car “released the gas pedal” it would have started to slow”

Yes, and at 38mph, how much would it have slowed in the space of a few seconds before impact? 1 mph? Enough to be noticeable by the people gathering evidence on the scene? I don’t think so.

Let’s wait for the investigation, but so far there’s no evidence to suggest there wasn’t a reaction from the car.

“The logs would show the action and the resulting (however small) slowdown.”

Yes, they will, or they’ll prove otherwise. Since we haven’t seen those logs yet, why are you assuming that it didn’t? You’re basing everything on the assumption that there was no response from the car, whereas it’s more likely that there wasn’t enough time between seeing the danger and reacting for it to have any effect on that car’s momentum. Just as there probably wouldn’t have been if a human was driving under the same circumstances.
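
(A rough sanity check on that momentum point, assuming a coasting deceleration of about 0.07 g for simply lifting off the accelerator; that figure is a ballpark assumption, not a measurement from this vehicle.)

# Rough sketch: how much speed a car sheds by merely lifting off the gas,
# assuming a constant coasting deceleration of ~0.07 g (an assumed ballpark,
# not a value measured from the Uber vehicle).
MPH_TO_MS = 0.44704   # miles per hour -> metres per second
G = 9.81              # m/s^2

def speed_after_coasting(initial_mph, coast_decel_g, seconds):
    """Speed in mph after coasting for `seconds` at a constant deceleration."""
    v0 = initial_mph * MPH_TO_MS
    v = max(0.0, v0 - coast_decel_g * G * seconds)
    return v / MPH_TO_MS

for t in (0.5, 1.0, 2.0):
    v = speed_after_coasting(38.0, 0.07, t)
    print(f"{t} s off the gas from 38 mph: ~{v:.1f} mph (a loss of ~{38.0 - v:.1f} mph)")

At that magnitude, lifting off the gas a second before impact sheds only a mile or two per hour, which would be hard to spot at the scene but should still show up in the logs.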

Will B. says:

Re: Re: Re:3 Maybe

What I find more interesting is the question of whether and when slamming on the brakes is the wrong idea. If there is literally not enough time to react and prevent the collision, but there is another car riding up your arse, slamming on the brakes can actually cause a larger and more deadly accident than simply accepting the inevitable. It’s strange for us to consider, and seems wrong at an instinctual level, but it’s the sort of reaction an AI car may have.
(Note that I’m not saying it happened in this case, only that it’s an interesting thought exercise. It seems to be one the article indulges in too, when they mention the car didn’t stop or swerve.)

Anonymous Coward says:

Dark

Why is darkness a defence – don’t these cars have LIDAR (or similar) precisely so they can be better than people at noticing potential risks in bad lighting conditions?
Or is that all just so much PR?
… And car headlights?

I have bad news for Uber AI development – in the UK pedestrians may cross roads anywhere (yes, we have specific crossings but no concept of jaywalking) – in the UK, if you see a person on a road refuge etc., you need to drive on the assumption that something stupid may happen. Especially if they have a baby buggy – they always seem to point that into traffic first!
… plus badly lit / unlit roads are common in lots of the UK.

Gerald Robinson (profile) says:

The problem isn’t AI cars, it’s the DMCA anti-circumvention provision. Unaudited software is dangerous. The auto companies cannot be trusted to audit it on their own (VW and BMW emissions faking, GM, the recent airbag scandal, …).

The only solution that seems reasonable to me is to require all autonomous car software to be open source with full disclosure! Otherwise, in many cases, it will be impossible to determine fault, particularly if the courts follow the bad precedent of not requiring full disclosure of the code and all supporting docs. Even if disclosure is required, review of the code is prohibitively expensive for one person or entity. With open source, many people will review the code, and some will even make money by publishing tweaks or collecting bounties for bugs – e.g., whether the remote unlock opens just the driver’s door, all doors, or some combination, rather than only the vendor’s choice.

Anonymous Coward says:

UPDATE: "But two experts who viewed the video told..."

“… The Associated Press that the SUV’s laser and radar sensors should have spotted Herzberg and her bicycle in time to brake.”

Well, there ya go, kids. TWO experts compared to one sleazy donut-addicted corrupt cop. *Usually, that’d clinch it for Techdirt, but not when it’s Uber!*

Oh, and incidentally: MY TAKE IS PROVEN RIGHT. Cameras or processing likely FAILED.

Anonymous Coward says:

OMG! Should have read ALL first! Driver two felony convictions!

Tempe police have identified the driver as 44-year-old Rafael Vasquez. Court records show someone with the same name and birthdate as Vasquez spent more than four years in prison for two felony convictions – for making false statements when obtaining unemployment benefits and attempted armed robbery – before starting work as an Uber driver.

Typical Uber employee, explains the rapes and robberies.

Oh, and apparently wasn’t paying attention, either.

Now, this being Techdirt with its hateful fanboys, I won’t get the least admission that my take was right and that the above censoring was, as always, just because Techdirt can’t stand any opposition.

Nonetheless, I’m having the last laugh, as usual.

Wendy Cockcroft (user link) says:

Re: OMG! Should have read ALL first! Driver two felony convictions!

So the censorship fan is whining about having his posts hidden; typical OOTB.

As for “typical Uber employee” would you care to provide a citation? Yes there have been horrible incidents but “Typical” means: “adjective
1.
having the distinctive qualities of a particular type of person or thing.
“a typical day”
synonyms: representative, classic, quintessential, archetypal, model, prototypical, stereotypical

So… is bad behaviour typical of an Uber driver? I’ve used Uber and never had a bad experience with them.

The “above censoring” is us sweeping your nonsense out of the way so we can get more quickly to the comments that are actually worth reading. Yours are not. As for opposition, I don’t believe I’ve ever had a comment hidden even when I’ve argued with other commenters.

Thad (user link) says:

The video’s been released. I haven’t watched it but I’ve read several reactions, which disagree with the comments made by the chief of police.

ExtremeTech: New Video of the Tempe Crash Looks Really Bad for Uber and Its Driver

The Pedestrian Didn’t Emerge Suddenly Out Of Nowhere

Initial statements from the Tempe Police painted a fairly clear, if ultimately problematic, picture of the pedestrian having suddenly appeared “out of the shadows” — as if she had been invisible behind foliage on the median until the last second — and suggested that the resulting collision might have been unavoidable. However, the dashcam video tells a somewhat different story. While the collision itself might have been unavoidable, there should have been enough time to at least mitigate the impact by braking or swerving. The woman who was killed is already well out into the roadway as the Uber vehicle approaches. She has already slowly walked across the left lane of the two lanes and is in the middle of the right lane even before the vehicle’s headlamps are close enough to illuminate her.

Associated Press via New York Daily News: Experts: Uber SUV’s autonomous system should have seen woman

Experts who viewed the video told The Associated Press that the SUV’s sensors should have seen the woman pushing a bicycle and braked before the impact.

Also, Uber’s human backup driver appears on the video to be looking down before the crash and appears startled about the time of the impact.

"The victim did not come out of nowhere. She’s moving on a dark road, but it’s an open road, so Lidar (laser) and radar should have detected and classified her" as a human, said Bryant Walker Smith, a University of South Carolina law professor who studies autonomous vehicles.

Sam Abuelsmaid, an analyst for Navigant Research who also follows autonomous vehicles, said laser and radar systems can see in the dark much better than humans or cameras and that the pedestrian was well within the system’s range.

"It absolutely should have been able to pick her up," he said. "From what I see in the video it sure looks like the car is at fault, not the pedestrian."

It bears noting that it’s still early and the investigation is ongoing. NTSB investigators will have access to more information than the video.

Nonetheless, given that we’ve got an article focused on an early reaction by the chief of police, I think it would be fair to put up another one discussing the early reactions of researchers who disagree with his interpretation of the video.

PaulT (profile) says:

Re: Re:

The problem is, there’s a bunch of reactions to the video, and most of them are biased. People who want it to be the car’s fault are coming up with a multitude of ways that the video shows things that should have been reacted to, while others seem to be denying things that are seen in the video are there at all.

That’s the problem with this kind of thing, everyone’s sorted themselves out into tribes, and once people are in them then it’s hard to get them out even with empirical evidence. Let’s see what else is revealed, but this kind of piecemeal approach is always frustrating to the conversation as people try to second guess every piece of evidence before and after it’s been released.

fixitmanarizona says:

driver at fault too

Well, in Tempe, anyway, the driver IS AT FAULT, as is the company he works for. The driver was obviously texting while driving and NOT IN CONTROL of the vehicle. Also, he is guilty of speeding. The MAXIMUM speed limit was 35 and he was going 38.
Uber should not hire drivers who will allow the vehicle to speed, nor who take their attention from the road for even a second. Also, they should not allow their cars to be “self-driving” as this does not allow the driver to be in control of the vehicle.
They COULD, however, institute software that does not allow speeding, and also pulls the car to the side of the road, ejects the driver, and locks it if EVEN FOR A SECOND their driver takes his attention off the road.
