Google's Self-Driving Car Causes First Accident, As Programmers Try To Balance Human Simulacrum And Perfection
from the get-out-of-my-lane,-Dave dept
Google’s self-driving cars have driven millions of miles with only a dozen or so accidents, all of them the fault of human drivers rear-ending Google vehicles. In most of these cases, the drivers either weren’t paying attention or weren’t prepared for a vehicle that was actually following traffic rules. But this week, an incident report filed with the California Department of Motor Vehicles (pdf) revealed that a Google automated vehicle was at fault in an accident for what’s believed to be the first time.
According to the report, Google’s vehicle was in the right-hand turn lane of a busy thoroughfare in Google’s hometown of Mountain View, California, last month, when its path was blocked by some sandbags. It moved left to get around the sandbags and, at low speed, struck a city bus that the car’s human observer had assumed would slow down, but didn’t. All in all, it’s the kind of accident any human driver might be involved in on any day of the week. But given the press and the public’s tendency toward occasional hysteria when self-driving technology proves fallible, Google is busy trying to get out ahead of the report.
Google compiles monthly reports for its self-driving car project, and while its February report addressing this specific accident hasn’t been made public yet, Google has been releasing an early look to the media. In the report, Google notes that, just like humans, its cars can’t always successfully predict another driver’s behavior on the road:
“Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.
This is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.
We’ve now reviewed this incident (and thousands of variations on it) in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.”
Live and learn. Or compute and learn. Whatever. If automated vehicles were going to cause an accident, it’s at least good that this appears to be an experience (don’t give city bus drivers the benefit of the doubt) that programmers will learn from. The problem is that, as with so many new technologies, people are afraid of self-driving cars. As such, automated vehicles can’t just be as good as human drivers; they’ll have to be better than human drivers before the public becomes comfortable enough with the technology for it to see widespread adoption.
By any measure, self-driving cars have been notably safer than most people imagined, but obviously there’s still work to do. Recent data from the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab suggests that self-driving cars have twice the accident rate of human-driven vehicles — but, again, largely because people aren’t used to drivers that aren’t willing to bend the rules a little bit. Striking an acceptable balance between having an automated driver be perfect — and having an automated driver be more human-like — is going to be a work in progress for some time.