Wouldn’t it be amazing to have self-driving cars? While traveling, you wouldn’t need to watch the road; you could chat or play games on your phone. Yet even after successful experiments, self-driving cars are nowhere to be seen. Why?
Here is a scenario. You are driving a car and suddenly the brakes fail. You are now in a situation where you must either hit a concrete barrier or hit a pedestrian walking on the road. What will you do? What are your moral values?
Is a human life more significant than infrastructure? Is an elderly person’s life worth less than a young person’s? Is a woman’s life worth more than a man’s? There is truly no correct answer to any of these questions, yet these moral values shape how a car should respond to such once-in-several-lifetimes situations.
These scenarios are derived from the trolley problem. If you are unfamiliar with it, the Trolley Problem goes as follows:
A runaway trolley is barreling down the railway tracks. Ahead, five people are tied up on the track, unable to move, and the trolley is headed straight for them. You are standing nearby, next to a lever that can divert the trolley onto another set of tracks. The catch is that there is also one person on that other track. You have two options:
- Do absolutely nothing, allowing the trolley to kill the five people on the main track.
- Pull the lever, diverting the trolley onto the other track, where it will kill one person.
Which is the most ethical choice?
This is a simple question without an easy answer.
MIT’s Moral Machine project lets ordinary people judge scenarios like the one above and make the call. The data gathered is then used to help programmers train self-driving cars to handle these tricky situations.
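To make the idea concrete, here is a minimal sketch of how aggregated survey preferences could, in principle, be turned into weights that rank unavoidable-collision options. This is not the real Moral Machine pipeline; every name and number below is a hypothetical illustration.

```python
# Toy sketch only: hypothetical crowd-sourced "harm weights" per outcome
# category, where a lower score means the outcome was preferred more
# often by survey respondents. These values are invented for illustration.
HARM_WEIGHTS = {
    "hit_barrier": 1.0,        # risks the passenger
    "hit_pedestrian": 5.0,     # risks a bystander
    "swerve_into_group": 9.0,  # risks several bystanders
}

def least_harmful(options):
    """Return the option with the lowest crowd-sourced harm weight."""
    return min(options, key=lambda option: HARM_WEIGHTS[option])

# In the brake-failure scenario above, the weighted choice would be:
print(least_harmful(["hit_barrier", "hit_pedestrian"]))  # -> hit_barrier
```

Even granting this framing, the hard part is not the code: it is deciding whose values the weights should encode in the first place.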
However, no amount of data can satisfy people who see the self-driving car as a machine capable of killing living beings.
And that, my friends, is a problem technology really can’t solve. For this reason, it may take longer than expected for self-driving cars to finally grace our roads.