When a car T-boned a self-driving Uber in Tempe, Arizona last Friday, it brought the company’s autonomous automobile test program to a screeching halt.
No serious injuries were reported, and the autonomous car was not at fault; the other driver failed to yield. Still, Uber suspended testing not only in Arizona but also in Pittsburgh and California.
Is this a case of a small step forward and two steps back?
Any accident is scary. Pictures from the incident are frightening, showing Uber’s Volvo XC90 flipped on its side. The timing is also unfortunate: the crash comes just months after negative press forced the company to pull its pilot program in California.
Uber formally began testing self-driving cars on San Francisco streets in December 2016. As a city known for courting both freethinkers and the technorati, the City by the Bay would seem a natural place for autonomous cars to prowl. But Uber entered into a spat with the state on the very first day of testing.
The state argued that Uber needed a special permit to test autonomous vehicles on public roads and hadn’t applied for one. Uber countered that because its vehicles required a human driver behind the wheel, no permit was needed. Reports of self-driving Ubers breaking traffic laws, along with complaints from cyclists, put the kibosh on the program in less than a week. Even after all that, Uber recently obtained the proper permit in California, only to have the program grind to a halt this weekend.
We Need The Robots
The larger question is: What type of lasting impact will the vehicle crash have on the future of self-driving cars?
In some ways, it reminds us that we need the robots. Over 40,000 people died on American roads last year. Humans do a particularly bad job at driving, at least when they’re drunk (40 percent of crashes), distracted (16 percent of crashes), drowsy (the CDC estimates 70,000 crashes a year) or seeing red, a.k.a. road rage (AAA says nearly 6 million people have intentionally bumped or rammed another car).
Robots, in contrast, are far more reliable. Indeed, that’s what the National Highway Traffic Safety Administration concluded when it investigated a fatal Tesla crash last May. Although the driver was killed while Autopilot was engaged, the NHTSA found that among Tesla cars on the road, those equipped with Autopilot crashed 40 percent less frequently than those without.
Basically, it comes down to this: the technology doesn’t have to be perfect. It just has to be better than humans. Elon Musk, Tesla’s CEO, even framed its self-driving initiatives in “moral” terms and blamed the media for hyping these types of accidents.
“You need to think carefully about this, because if, in writing some article that’s negative, you effectively dissuade people from using an autonomous vehicle, you’re killing people,” Musk said.
Morality vs. Money
Of course, no reporter is going to be charged with manslaughter for reporting the ups and downs of an exciting new technology that promises to change nearly everything.
But it could be argued that Uber, which relies on an expensive fleet of human drivers, has the most to lose from negative publicity. If it leads the race and eliminates the need for those drivers, it could increase revenues by several multiples. But if it doesn’t, the company itself could become obsolete, and its human-driven fleet will keep having many more crashes.
That’s why it will be interesting to see how long Uber will, or can, sit on the sidelines. Suspending its self-driving program was the politically expedient move: short-term gain for long-term cost.
Fully automated cars are still a long way off. But no one should be ceding ground in what could be an issue of life or death. Literally.
Article written by MG Rhodes. Submitted: 3/27/17
Comments & thoughts to: firstname.lastname@example.org