There's a bunch of talk today over the news that one of Google's self-driving vehicles apparently got into a minor fender bender. Google was quick to point out that the car was actually under human control
at the time, so really there's not much of a story here. However, since it's leading to a variety of discussions about how "scary" autonomous vehicles are, let's get an important point out of the way: there's no way that autonomous vehicles will have a perfect track record and never, ever get into an accident. They will crash. It's just a matter of time. The real question is not whether they will crash, but whether the likelihood of an accident (or of a serious accident) is significantly higher or lower than with a human at the controls. I'm certainly not confident that the state of the art today is safer, but I suspect it won't be long before such vehicles have a much higher probability of getting you to your destination safely than a human-driven vehicle does.