Driverless Car Accidents & the Trolley Problem

Grappling with the moral choices that arise when driverless cars, inevitably, get into accidents.

Solving for Car Accidents

According to the U.S. Centers for Disease Control and Prevention (CDC), the leading causes of unintentional injury death are:

• Poisoning & Drug Overdose: 38,851 deaths per year
• Motor Vehicle Accidents: 35,398 deaths per year
• Unintentional Falls: 31,959 deaths per year
• Extreme Selfies: 73 deaths per year (not a big number, yet profoundly disturbing)

Motor vehicle accidents are also the number one cause of death in those aged 25 and under.

What we drive is not only a form of transport but also an agent of harm, mostly due to human error.

And, yet, in the next decade, those motor vehicles will undergo an incredible transformation. Many of us will stop driving altogether, as computers and AI take over behind the wheel. “Leave the Driving to Us” will be less Greyhound and more Google.

Sure, driverless cars will reduce the number of accidents and deaths on the road. But, some will still occur.

Those accidents will be caused by computers, not humans. And, if we’re literally putting our lives in the hands of silicon chips, it’s fair to ask how they’ll decide what used to be a theological question: Who shall live, and Who shall die?

The “Trolley Problem” Dilemma 

That’s where the Trolley Problem comes in.

Formulated in 1967 by Philippa Foot, it’s a classic ethical problem… without a good solution. Here’s how Wikipedia describes it:

“There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the most ethical choice?”

The same impossible scenario can be reformulated so that the only way to save the five is a far more direct intervention:

“A trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?”


The Driverless Car Dilemma

It’s easy to see how this ethical dilemma could play out once driverless cars start getting into accidents:

It’s a snowy night. Roads are slick. Paul is heading home in the driverless car he owns. The car hits a patch of black ice and starts spinning out of control, heading towards a crowded sidewalk. The onboard computer has a choice to make: it can let the car crash into the crowd, killing at least five people (let’s keep the number the same as in the Trolley Problem) but saving Paul’s life. Or, it can use whatever traction remains to steer the car into a tree, killing Paul but saving the lives of those innocent pedestrians.

What should we program the car to do in case of an accident? What is the ethical choice?
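To make the dilemma concrete, here is a minimal sketch in Python (hypothetical names, invented casualty numbers) of the decision the onboard computer would face. The point isn’t the code; it’s that someone has to write the ranking function, and that ranking function is the ethical choice.

```python
# A minimal sketch (hypothetical names, invented casualty estimates) of the
# choice the onboard computer faces once the skid becomes unrecoverable.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    occupant_deaths: int    # expected deaths inside the car
    pedestrian_deaths: int  # expected deaths on the sidewalk

STAY_COURSE = Option("continue into the crowd", occupant_deaths=0, pedestrian_deaths=5)
SWERVE      = Option("steer into the tree",     occupant_deaths=1, pedestrian_deaths=0)

def choose(options, rank):
    """Pick the option the ranking function scores lowest.
    The ranking function *is* the ethical policy."""
    return min(options, key=rank)

# Two equally programmable, mutually incompatible policies:
minimize_total_deaths  = lambda o: o.occupant_deaths + o.pedestrian_deaths
protect_occupant_first = lambda o: (o.occupant_deaths, o.pedestrian_deaths)

print(choose([STAY_COURSE, SWERVE], minimize_total_deaths).name)   # steer into the tree
print(choose([STAY_COURSE, SWERVE], protect_occupant_first).name)  # continue into the crowd
```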

Obviously, the Trolley Problem shows us there is no clear-cut right answer.

But remember that Paul owns that driverless car. Unlike the bystander at the lever in the Trolley Problem, Paul has a biased stake in the outcome: he isn’t an uninvolved observer making the call. It’s his car, and possession, they say, is nine-tenths of the law. Paul has possession. If Paul were the active driver, instead of a passenger in a driverless car, odds are that he’d choose to save his own life (unless he’s particularly heroic). It wouldn’t be a major moral leap to expect his driverless possession to make the same choice.

It’s more difficult, however, if Paul is riding in the driverless Uber of the future, or if he’s ride-sharing. Then the possession argument holds less sway. Instead, impartial regulation of what the onboard computer should do is likely the better choice: deliver the greatest good for the greatest number. After all, if Paul happened to be one of those pedestrians tomorrow – instead of the Uber passenger today – he’d make that choice himself. Put to a vote, the five pedestrians would clearly vote to keep the car out of their path, and they’d outnumber Paul, the lone passenger, who would vote the other way.

(The five would vote that way, too, in the case where Paul owns the car. But taking a vote to deprive Paul of his life or property against his will is not how our legal system works.)
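Purely as an illustration of the argument above (not a proposal), the policy itself could depend on who owns the vehicle: a privately owned car defaults to protecting its owner, while a shared or ride-hailed car follows the impartial, regulated rule. The names and outcome numbers are again hypothetical, mirroring the earlier sketch.

```python
# Illustrative only (hypothetical names): how the policy itself might depend on
# who owns the vehicle, following the reasoning above.

# Predicted outcomes as (occupant_deaths, pedestrian_deaths).
OUTCOMES = {
    "continue into the crowd": (0, 5),
    "steer into the tree": (1, 0),
}

def decide(occupant_owns_car: bool) -> str:
    if occupant_owns_car:
        # Privately owned car: protect the owner first, others second.
        rank = lambda deaths: (deaths[0], deaths[1])
    else:
        # Shared or ride-hailed vehicle: impartial, regulated rule that
        # minimizes total expected deaths (greatest good for the greatest number).
        rank = lambda deaths: deaths[0] + deaths[1]
    return min(OUTCOMES, key=lambda action: rank(OUTCOMES[action]))

print(decide(occupant_owns_car=True))   # continue into the crowd
print(decide(occupant_owns_car=False))  # steer into the tree
```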

No question: these and other ethical dilemmas will arise in an era when computer programs determine life or death, and with driverless cars they will. But, as we’ve reasoned above, there do seem to be morally superior solutions, even when every available choice is a bad one.

Still, for my money, I’ll live with those bad choices instead of 35,000 fatal human errors a year.

Article written by: Charles Bogle 3.0.  Submitted 2/27/17

Comments & thoughts to: cb@dryve.com

 
