Earlier this month Google announced that the company's self-driving cars have been involved in just thirteen accidents since it began testing the technology back in 2009, none of them Google's fault. The company has also started releasing monthly reports, which note that Google is currently testing 23 Lexus RX450h SUVs on public streets, predominantly around the company's hometown of Mountain View, California. According to the company, these vehicles have logged about 1,011,338 "autonomous" miles (the software is doing the driving) since 2009, and average about 10,000 autonomous miles per week on public streets.
Alongside this announcement and the details of these accidents, Google sent a statement to the news media noting that while its self-driving cars do get into accidents, the majority of them involve the cars getting rear-ended at stoplights, through no fault of their own:
"We just got rear-ended again yesterday while stopped at a stoplight in Mountain View. That's two incidents just in the last week where a driver rear-ended us while we were completely stopped at a light! So that brings the tally to 13 minor fender-benders in more than 1.8 million miles of autonomous and manual driving—and still, not once was the self-driving car the cause of the accident."
If you're into this kind of stuff, the reports (pdf) make for some interesting reading, as Google tinkers with and tweaks the software to ensure the vehicles operate as safely as possible. That includes identifying unique situations at the perimeter of traditional traffic rules, like stopping or moving for ambulances despite a green light, or calculating the possible trajectory of two cyclists blotto on Pabst Blue Ribbon and crystal meth. So far, the cars have traveled 1.8 million miles (a combination of manual and automated driving) and have yet to see a truly ugly scenario.
Which is all immeasurably cool. But as Google, Tesla, Volvo and other companies tweak their automated driving software and its applications expand, some much harder questions begin to emerge. Like, oh, should your automated car be programmed to kill you if it means saving the lives of a dozen other drivers or pedestrians? That's the quandary researchers at the University of Alabama at Birmingham have been pondering for some time, and it's becoming notably less theoretical as automated car technology quickly advances. The UAB bioethics team treads the ground between futurism and philosophy, and notes that this particular question is rooted in a classic thought experiment known as the Trolley Problem:
"Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?"
What would a computer do? What should a Google, Tesla or Volvo automated car be programmed to do when a crash is unavoidable and it needs to calculate all possible trajectories and the safest end scenario? As it stands, Americans take around 250 billion vehicle trips a year, resulting in roughly 30,000 traffic deaths annually, something we generally view as an acceptable-but-horrible cost of the convenience. Companies like Google argue that automated cars would dramatically reduce those fatality totals, but with a few notable caveats and an obvious loss of control.
When it comes to literally designing and managing the automated car's impact on death totals, UAB researchers argue the choice comes down to utilitarianism (the car automatically calculates and follows through with the option involving the fewest fatalities, potentially at the cost of the driver) or deontology (the car's calculations are bound by categorical ethical rules, such as never actively killing anyone):
"Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people," explained UAB researcher Barghi. In other words, if it comes down to a choice between sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former.
Deontology, on the other hand, argues that "some values are simply categorically always true," Barghi continued. "For example, murder is always wrong, and we should never do it." Going back to the trolley problem, "even if shifting the trolley will save five lives, we shouldn't do it because we would be actively killing one," Barghi said. By that logic, whatever the odds, a self-driving car shouldn't be programmed to choose to sacrifice its driver to keep others out of harm's way.
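To make the distinction concrete, here's a loose sketch of how the two policies might diverge in code. To be clear, this is entirely hypothetical: the maneuver names, fatality estimates, and "actively kills" flags are invented for illustration, and nothing here reflects how Google's (unpublished) software actually works.

```python
# Hypothetical sketch of crash-choice policies for an automated car.
# Each candidate maneuver carries an (invented) expected fatality count
# and a flag for whether it involves actively killing someone.

def utilitarian_choice(maneuvers):
    """Pick the maneuver with the fewest expected deaths,
    even if that means sacrificing the car's own occupant."""
    return min(maneuvers, key=lambda m: m["expected_deaths"])

def deontological_choice(maneuvers):
    """Refuse any maneuver that actively kills; if none is
    permissible, make no choice at all (the 'shut down' outcome)."""
    permissible = [m for m in maneuvers if not m["actively_kills"]]
    return permissible[0] if permissible else None

# The oncoming-bus scenario from the text, with made-up numbers:
scenario = [
    {"name": "swerve into wall", "expected_deaths": 1, "actively_kills": True},
    {"name": "hit oncoming bus", "expected_deaths": 12, "actively_kills": True},
]

print(utilitarian_choice(scenario)["name"])  # the wall: 1 death beats 12
print(deontological_choice(scenario))        # None: every option actively kills
```

The interesting case is the second one: a strictly deontological car facing only "actively kills" options returns no decision at all, which previews the shutdown-rather-than-choose outcome discussed below.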
Of course, without some notable advancement in AI, the researchers note it's likely impossible to program a computer that can calculate every possible scenario and the myriad ethical obligations we'd ideally like to apply to them. As such, it seems automated cars will either follow the utilitarian path, or perhaps make no choice at all (simply shutting down when confronted with a no-win scenario to avoid additional liability). Google and friends haven't (at least publicly) truly had this debate yet, but it's one that's coming down the road much more quickly than we think.