Self-Driving Cars Already Making Life-and-Death Decisions
Autonomous vehicles (AVs) are already making profound choices about whose lives matter, according to experts.
"Every time the car makes a complex maneuver, it is implicitly making a trade-off in terms of risks to different parties," said Iyad Rahwan, MIT Cognitive Scientist. The most well-known issues in AV ethics are "trolly problems"—moral questions dating back to the era of trollies that ask whose lives should be sacrificed in an unavoidable crash. For instance, if a person falls onto the road in front of a fast-moving AV, and the car can either swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian, what should it do?
Rahwan and colleagues have studied what humans consider the moral action in no-win scenarios. While human-sacrifice scenarios are only hypothetical for now, Rahwan and others say they will inevitably arise in a world full of AVs. Human drivers can answer ethical questions big and small by intuition, but it is not that simple for artificial intelligence. AV programmers must either define explicit rules for each of these situations or rely on general driving rules and hope things work out. Even if programmers keep the rules vague, a pattern of behavior will be discernible, whether in individual incidents or in overall statistics.
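To make the contrast concrete, here is a minimal, hypothetical sketch of the two approaches just described: a hand-written rule for the swerve-or-straight dilemma versus a general cost-minimizing policy whose trade-offs are implicit in its weights. None of the names, rules, or risk numbers come from any real AV system; they are illustrative assumptions only.

```python
# Hypothetical illustration only -- not drawn from any real AV codebase.
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str          # e.g. "swerve" or "straight"
    passenger_risk: float  # assumed probability of serious harm to the passenger
    pedestrian_risk: float # assumed probability of serious harm to the pedestrian

def explicit_rule(outcomes: list[Outcome]) -> Outcome:
    """Approach 1: an explicit, hand-coded rule for this specific scenario.
    The (assumed) rule here: choose the option that puts the pedestrian at
    the least risk. The ethical trade-off is stated in the open."""
    return min(outcomes, key=lambda o: o.pedestrian_risk)

def general_policy(outcomes: list[Outcome],
                   w_passenger: float = 1.0,
                   w_pedestrian: float = 1.0) -> Outcome:
    """Approach 2: a generic cost-minimizing driving policy.
    No scenario-specific rule exists, but the weights still encode a
    trade-off between the parties -- it is just implicit."""
    def cost(o: Outcome) -> float:
        return w_passenger * o.passenger_risk + w_pedestrian * o.pedestrian_risk
    return min(outcomes, key=cost)

if __name__ == "__main__":
    options = [
        Outcome("swerve into barrier", passenger_risk=0.6,  pedestrian_risk=0.05),
        Outcome("continue straight",   passenger_risk=0.05, pedestrian_risk=0.7),
    ]
    print("explicit rule chooses:", explicit_rule(options).maneuver)
    print("general policy chooses:", general_policy(options).maneuver)
```

Either way, the maneuver the software selects reveals how it weighed the passenger's risk against the pedestrian's, which is exactly the kind of discernible pattern of behavior described above.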
The National Highway Traffic Safety Administration (NHTSA) said in a September 2016 report that "manufacturers and other entities, working cooperatively with regulators and other stakeholders (e.g., drivers, passengers, and vulnerable road users) should address these situations to ensure that such ethical judgments and decisions are made consciously and intentionally."