How does your self-driving car handle the Trolley Problem?

In the field of ethics, you often hear discussion of "The Trolley Problem," a thought experiment in which a person is forced to choose the best outcome in a situation where at least one person is guaranteed to die. I stumbled across this web article, which makes the case that the self-driving Google car could bring the trolley problem to reality. If your car is headed into a crash, should it sacrifice you to save bystanders?

How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
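To make the dilemma concrete, here is a minimal, purely hypothetical sketch (in Python, with invented maneuvers and made-up probabilities) of the kind of weighing the passage above imagines. Nothing here reflects how any real car is programmed; it only shows that the two policies can pick opposite outcomes from the same numbers.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_owner_fatality: float           # estimated probability the owner dies
    expected_other_fatalities: float  # estimated fatalities among everyone else

def choose_maneuver(options, policy="least_harm"):
    """Pick a maneuver under one of two hypothetical policies.

    'least_harm'    minimizes total expected fatalities, owner included.
    'protect_owner' minimizes the owner's risk, breaking ties by total harm.
    """
    if policy == "least_harm":
        key = lambda m: m.p_owner_fatality + m.expected_other_fatalities
    elif policy == "protect_owner":
        key = lambda m: (m.p_owner_fatality, m.expected_other_fatalities)
    else:
        raise ValueError(f"unknown policy: {policy}")
    return min(options, key=key)

# The blown-tire scenario from the quoted passage, with invented numbers.
options = [
    Maneuver("swerve into the oncoming school bus", 0.2, 3.0),
    Maneuver("steer into the retaining wall", 0.6, 0.0),
]

print(choose_maneuver(options, policy="least_harm").name)      # retaining wall
print(choose_maneuver(options, policy="protect_owner").name)   # school bus
```

The entire moral question lives in that `policy` argument: whoever sets it is making the call the quoted passage asks about.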

I would offer an additional moral question. If my car decides to sacrifice me, can it be programmed to kill me quickly and painlessly, rather than leaving me to the destruction of a car accident?
