
Would you buy a car programmed to kill you for the greater good?

A series of online surveys has uncovered an ethical dilemma that might slow the adoption of life-saving autonomous vehicles

Should a self-driving car kill its passengers for the greater good – for instance, by swerving into a wall to avoid hitting a large number of pedestrians? Surveys of nearly 2,000 US residents revealed that, while we strongly agree that autonomous vehicles should strive to save as many lives as possible, we are not willing to buy such a car for ourselves, preferring instead one that tries to preserve the lives of its passengers at all costs.

Why buy a self-driving car?

Driving our own cars might be an enjoyable pursuit, but it's also responsible for a tremendous amount of misery: it locks out the elderly and physically challenged, and traffic accidents are the leading cause of death worldwide for people aged 15 to 29.

Every year, over 30,000 traffic-related deaths and millions of injuries, costing close to a trillion dollars, take place in the US alone (worldwide, the numbers approach 1.25 million fatalities and 20 to 50 million injuries a year). And, according to numerous studies, human error is responsible for a staggering 90 percent or more of these accidents.

Autonomous vehicles (AVs) are still years away from being ready for prime time. Once the technology matures, however, self-driving cars and trucks could prevent a great number of accidents – up to the nine out of 10 caused by human error – with additional benefits like reduced pollution and traffic congestion.

"In the future, our goal is to have technologies that can help to completely avoid crashes," Volvo told us in an e-mail. "Autonomous vehicles will be part of this strategy since these vehicles can avoid the crashes caused by human error."

The driverless dilemma

While AVs are expected to vastly increase safety, there will be rare instances where harming or killing passengers, pedestrians or other drivers will be inevitable.

Autonomous vehicles, Volvo told us, should be cautious and polite, always stay within the speed limit, thoroughly evaluate risks and take all precautionary measures ahead of time – thus avoiding dangerous situations and, with them, any ethical conundrums.

But while Prof. Iyad Rahwan, one of the three authors of research published today in the journal Science, agrees that safety should be the manufacturers' priority, he also told us that "unavoidable harm" scenarios will still present themselves regardless of how sophisticated or cautious self-driving technology becomes.

"One does not have to go very far to imagine such situations," Rahwan told us. "Suppose a truck ahead is fully loaded with heavy cargo. Suddenly the chain that is holding the doors closed breaks, and the cargo falls on the road. An autonomous vehicle driving behind can only soften the impact by applying the brakes, but may not be able to stop in time to avoid the cargo completely, so there is some probability of harm to passengers. Alternatively, the car can swerve, but that might endanger other cars or pedestrians on the sidewalk."

On such rare occasions, the programming of a self-driving vehicle will have a split second to make a rational but incredibly tough decision that most drivers don't get to make – the decision on exactly who should be harmed and who should be spared. Should the car seek to minimize injury at all costs, swerving out of the way of two pedestrians even if it means crashing into a barrier and killing its single passenger? Or should the car's programming try to preserve the lives of its passengers no matter what?

Professors Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan conducted six online surveys (totaling close to 2,000 participants) to understand how the public at large feels about an artificial intelligence with the authority to make such delicate life-or-death decisions. The researchers see this as an ethical issue that must be adequately addressed, in great part because it could become a psychological barrier that slows the adoption of life-saving autonomous vehicles.

The survey participants were presented with scenarios such as this:

A sample question in the first of six online surveys conducted for this study

You are the sole passenger in an autonomous self-driving vehicle traveling at the speed limit down a main road. Suddenly, 10 pedestrians appear ahead, in the direct path of the car. The car could be programmed to: swerve off to the side of the road, where it will impact a barrier, killing you but leaving the 10 pedestrians unharmed; or stay on its current path, where it will kill the 10 pedestrians, but you will be unharmed.

As might be expected, the vast majority (76 percent) of the study participants agreed that the vehicle should sacrifice its single passenger to save the 10 pedestrians.

In later studies, participants agreed that an AV should not sacrifice its single passenger when only one pedestrian could be saved (only 23 percent approved), but as expected, the approval rate increased as the number of hypothetical lives saved increased. This pattern continued even when participants were asked to place themselves and a family member in the car.

The thorny issue, however, came up when participants were asked about what type of self-driving car they would choose for their personal use. Respondents indicated they were 50 percent likely to buy a self-driving car that preserved its passengers at all costs, and only 19 percent likely to buy a more "utilitarian" car that would seek to save as many lives as possible. In other words, even though the participants agreed that AVs should save as many lives as possible, they still desired the self-preserving model for themselves.

Presented with a choice, most people would buy a self-preserving car rather than a utilitarian one, even if this would increase casualties overall

Lastly, the study also revealed that participants were firmly against government regulation forcing AVs to adopt a utilitarian algorithm: they indicated they were much more likely to buy an unregulated vehicle (59 percent likelihood) than a regulated one (21 percent likelihood).

As the researchers explained, this problem is a glaring example of the so-called tragedy of the commons, a situation in which a shared resource is depleted by individual users acting out of self-interest. In this case, even though society as a whole would be better off using utilitarian algorithms alone, an individual can still improve his chances of survival by choosing a self-preserving car at the cost of overall public safety.
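A toy calculation helps make that incentive structure concrete. Every number below is a hypothetical assumption for illustration (none of it comes from the study): suppose a self-preserving algorithm slightly lowers its owner's own risk while shifting a somewhat larger amount of risk onto everyone else. Each buyer still comes out ahead individually, yet the fleet as a whole kills more people.

    # Toy tragedy-of-the-commons arithmetic. All numbers are made-up
    # assumptions for illustration; none of them come from the study.
    POPULATION = 1_000_000        # hypothetical fleet of one car per person
    BASELINE_RISK = 1e-5          # per-person yearly risk if all cars are utilitarian
    OWNER_RISK_CUT = 2e-6         # how much a self-preserving car helps its own owner
    RISK_SHIFTED_TO_OTHERS = 6e-6 # extra risk each such car imposes on everyone else

    def expected_deaths(n_self_preserving: int) -> float:
        """Fleet-wide expected deaths per year under the toy assumptions."""
        added = n_self_preserving * RISK_SHIFTED_TO_OTHERS
        saved = n_self_preserving * OWNER_RISK_CUT
        return POPULATION * BASELINE_RISK + added - saved

    print(expected_deaths(0))           # 10.0 - everyone drives a utilitarian car
    print(expected_deaths(POPULATION))  # 14.0 - everyone drives a self-preserving car
    # Each owner's personal risk still drops by OWNER_RISK_CUT, so
    # self-interest pushes every buyer toward the socially worse outcome.

Under these made-up numbers, four extra people die per year once everyone defects to the self-preserving option, even though every individual buyer made a choice that lowered their own personal risk.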

Is AI ready to make life-or-death decisions?

Scenarios where an artificial-intelligence-driven vehicle will have to decide between life and death are going to be rare, but the chances of encountering them will increase once millions of self-driving cars hit the road. And regardless of whether a particular scenario ever occurs, software engineers still need to program those choices into the car's software ahead of time.

While the hypothetical scenarios presented in the studies were simplified, a real-life algorithm will in all likelihood need to face several more layers of complexity that make the decision even tougher from an ethical perspective.

The ethical considerations of an "unavoidable harm" scenario can become very intricate

One complicating factor is that the outcome of swerving off the road (or staying the course) may not be certain. Should an algorithm decide to run off the road to avoid a pedestrian if it detects that this will kill the passenger only 50 percent of the time, whereas keeping straight would have killed the pedestrian 60 or 70 percent of the time? And how accurately can the car estimate those probabilities?

Also, when the alternatives are between hitting two motorcyclists – one with, the other without a helmet – should the car opt to hit the law-abiding citizen, since he would have a slightly better chance of survival? Or should a reward for following the law factor into the decision?

Or, should the lives of a pregnant woman, a doctor, an organ donor or a CEO be considered more worthy than other lives? Should the age and life expectancy of the potential victims be a factor? And so on.
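A minimal sketch shows where those complications would surface in actual software. The Python below is purely illustrative – every probability and weight is a hypothetical assumption, not the study's method or any manufacturer's real decision logic – but it makes clear that the hard part is not the arithmetic, it's the inputs.

    # Toy expected-harm comparison for an "unavoidable harm" scenario.
    # All probabilities and weights are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        people_at_risk: int   # how many people this maneuver endangers
        p_fatality: float     # estimated chance that each of them is killed
        weight: float = 1.0   # any non-uniform value encodes a moral judgment

    def expected_harm(outcomes) -> float:
        """Probability-weighted harm score for one candidate maneuver."""
        return sum(o.people_at_risk * o.p_fatality * o.weight for o in outcomes)

    # The scenario from the text: swerving kills the passenger ~50% of the
    # time, while staying on course kills the pedestrian ~60-70% of the time.
    maneuvers = {
        "swerve": [Outcome(1, 0.50)],  # endangers the single passenger
        "stay":   [Outcome(1, 0.65)],  # endangers the pedestrian
    }
    best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
    print(best)  # -> "swerve" (expected harm 0.50 vs 0.65)

Even in this toy form the open questions are visible: the fatality probabilities are themselves uncertain estimates, and setting the weight to anything other than 1.0 – for a helmeted motorcyclist, a child, a doctor – is precisely the kind of choice the surveys suggest the public has not yet agreed on.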

"I believe that on our way towards full autonomous vehicles we need to accept the fact that these autonomous agents will eventually end up making critical decisions involving people's lives," Dr. German Ros, who led the development of an AV virtual training environment (and did not participate in this study) told Gizmag.

"We would have to ask ourselves if autonomous vehicles should be serving 'us' as individuals or as a society. If we decide that AVs are here to improve our collective lives, then it would be mandatory to agree on a basic set of rules governing AV morals. However, the findings of this study suggest that we are not ready to define these rules yet ... (showing) the necessity of bringing this question to a long public debate first."

The ethical choice could change depending on the age, health, or various other aspects of the lives of the people involved

In an effort to fuel the debate and gain a human perspective on these incredibly tough ethical questions, the researchers have created an interactive website that lets users create custom scenarios (like the one above) and pick what they believe to be the moral choice for scenarios devised by other users.

The big picture

Such scenarios where harm is unavoidable can present extremely tough ethical questions, but they are going to be rare. We don't know exactly how rare, since cars don't have black boxes that might tell us, after the fact, whether an accident could have been avoided by sacrificing the driver. It is, however, safe to assume that the number of lives an advanced self-driving fleet could save would easily dwarf the number lost in such no-win situations.

The troubling aspect is that, no matter how uncommon those scenarios might be, they are likely to gain a disproportionate amount of public exposure in mainstream media – much more than the far larger (but, sadly, less "newsworthy") number of lives that AVs would save.

"Will media focus on the dangers rather than benefits of AVs be a problem? Almost certainly yes," Shariff told Gizmag. "This plays off common psychological biases such as the availability heuristic. One of the reasons people are so disproportionately afraid of airplane crashes and terrorism (and especially acts of terrorism involving airplane crashes) is because every time one of them happens, the media spends an enormous amount of time reporting on it, in excruciating detail. Meanwhile, common (single victim) gun deaths and prescription drug overdoses, which are objectively more dangerous, are relatively ignored and thus occupy less mindshare.

"Now, you can probably recall the minor media frenzy over the Google car that 'crashed' into a bus (at 2 mph). A crash involving human drivers is the 'dog bites man' story, whereas the much rarer crash involving an AV will be a 'man bites dog' story which will capture an already nervous public's attention. "

How do we build an autonomous fleet?

One of the first steps toward getting more life-saving autonomous vehicles on the road will have to be to show with hard data that they are indeed safer than manually driven vehicles. Even though humans cause 90 percent of accidents, this doesn't mean that AV systems (especially in their early versions) will prevent them all.

Another important question concerns regulation. What type of legislation would lead to a more rapid adoption of autonomous vehicles? Governing agencies might naturally be tempted to push for a utilitarian, "save as many lives as possible" approach. But, as this study highlights, people are firmly against this sort of regulatory enforcement – and insisting on a utilitarian approach could slow the adoption of self-driving cars.

For these reasons, the authors of the study have suggested that regulators might want to forgo utilitarian choices for the sake of putting the safer AVs on the road sooner rather than later.

One encouraging aspect is that regulators seem willing to listen to the public's input on this matter.

"The autonomous vehicle offers a lot of promise in the role of vehicle safety and overall safety of the motoring public," California Department of Motor Vehicle representative Jessica Gonzalez told Gizmag. "As California is leading the pack, we are working close with other states and the Federal government. We have received public comment on the draft deployment regulations we are contemplating the public comment as we are writing the operational guidelines. We held two workshops on the draft regulations and received 34 public comments."

The California DMV, however, would not comment on whether the scenarios of "unavoidable harm" described in this article had so far been part of the discussion in forming the draft regulations, or whether there are plans to discuss these scenarios in the future.

On the manufacturers' side, Volvo told us the best way to speed up the adoption of autonomous vehicles would be to emphasize advantages like greater mobility for more people, as well as the fuel and time savings that will come from a smoother ride.

Google is also clearly thinking about the question deeply – one of the reasons why the Google car is probably the least menacing-looking vehicle you'll ever see.

"But I think the biggest factor will be the 'foot in the door' variable," Shariff told Gizmag. "Autonomous capabilities are going to emerge gradually in cars. We are already seeing that, and we have already seen that. People are thus going to be eased into being driven by these cars, as they bit-by-bit take over more functions. We have become accustomed to terrifying things (like elevators and planes) by just being gradually exposed to them. Of course, elevators and planes never had to be programmed to make trade-off decisions about people's lives."

The study appears in today's issue of the journal Science. The video below further discusses the findings.

Google and Tesla did not respond to our request for comment.

Source: MIT

The social dilemma of self-driving cars

17 comments
ClauS
Looks like everyone wants to shock. There is no dilemma. An autonomous driving system should behave like a "perfect" taxi or limousine driver: it only needs to obey the driving rules. There are rules that limit speed in cities, and rules that require adapting velocity to road conditions. Autonomous vehicle sensors are also far better than human vision, even before considering radar and lidar systems. Combining these facts, the chance of a "proper" autonomous vehicle being caught in a dilemma situation is negligible, and in that case it should simply apply the law. For what it's worth, a vehicle radar system is actually capable of "seeing" vehicles and pedestrians that are "masked" by other vehicles long before a human driver would see them, such as a person approaching the road between two parked cars.
CompassionateCougar
You can never eliminate 100% of risk. There will always be some element of it, even in the best-designed scenarios. BTW, another reason so many people fear flying (aside from intense media attention) is that passengers don't feel in control – they are not the ones flying (driving) the plane. Commercial aviation is by far the safest way to travel, yet many are still anxious flyers despite all of the reassuring statistics.
Watch what happens when we have our first fatal AV accident – it is likely to strike terror in the hearts of the public unless there is always a way for a passenger to take control when needed.
We human beings seem to fear loss of control much more than the consequences of it.
gizmowiz
Claus, you're dreaming like Santa Claus. That's NOT the real world with real situations. The situations that arise are NOT negligible. I can't figure out for the life of me how you can't see that your position is unreasonable. I have had to take action to save lives MANY times in my nearly 50 years of driving – at least a dozen times, maybe more. And I don't drive millions of miles a year. Autonomous vehicles might easily drive millions of miles in their lifetimes, so they will see hundreds of these situations at the very least. You must have been raised in a small town with few people, and you don't know the reality of the far and wide variety of landscapes out there – for example in the mountains: rock slides and snow slides, trucks that lose their brakes, bikers that go too fast and cause accidents, etc., etc. You're dreaming of an unreal world.
bobflint
Moot point, since the "perfect" autonomous vehicle does not get into those situations. Humans, on the other hand, typically will not even think of the options, as in most cases they will endeavor to remain unharmed and blissfully unaware of the possibilities of death. As to running down the pedestrians at that intersection: faced with plowing into a bunch of people versus a concrete wall, most drivers would not even think about the wall – they would simply try to stop, or at least slow down enough to minimize the carnage.
Kenlbear2
Regardless of the ethics involved, we will be forced to follow the path of least legal risk by the insurance companies.
Daishi
There is a huge difference between a system that will stay between the lines and slow down or stop for vehicles or people ahead, and one that does "everything else". I've worked with technology enough to watch it fail over and over again doing simple things it has had years of practice doing. I think people who are optimistic that we will have autonomous cars in a couple of years, and that they will be infallible, are in for some major disappointment.
There are so many sensors and redundant sensors that need data collected and analyzed by systems and redundant systems. There are so many factors in decision-making, and so much difficult image recognition requiring a lot of computing. Managing the sheer complexity of 100 percent autonomous systems would require far more work and code than modern operating systems. I just don't see a code base that complex ever being free of bugs, and some of the strange scenarios I have encountered while driving would be really hard to teach an autonomous system to account for.
Brian M
I think the solution to the moral dilemma here is to follow the route of least blame, or how a court would see it if judging a human driver.
That is, if action is taken that is not due to the fault of the driver, then they can't be held responsible for the consequences of acting to preserve their own life.
A car driver or AV is not the pilot of a fighter aircraft deciding to make the ultimate sacrifice by steering the plane away from a crowded area instead of ejecting and surviving.
The scenarios posed here are a bit dubious anyway – for example, swerving into pedestrians to avoid a falling load? The AV would have to be driving too close and at too high a speed with pedestrians close at hand, so it's a scenario that should never occur: the car should be able to brake or swerve clear without serious harm to anyone.


neon-yellow
Obviously, people do not know how to drive once they leave the vehicle. These writers might do better to do all their work while riding in a driverless vehicle, instead of speculating about how to avoid keeping a safe distance and a properly adjusted speed when potential obstacles are known to exist. There's no reason a computer could not devise a superior safety plan without getting boggled by this bizarre scenario.
hearthhealth
The automobile is a device for individuals to privatize public space. The AV debate you cover so well suggests to me that it is time for the public to make the decisions about how the devices used on it are programmed. Harm reduction needs to be job one.
If "vulnerable" road users are at the bottom of the hierarchy, we will never get people to walk, cycle, or even use transit (which requires lots of walking) in numbers sufficient to make our communities convivial or our climate stable.
f8lee
While the ethical questions remain, I believe that if AVs really come to pass, the question of "would you buy one that...?" will be moot. It seems to me that if people just view AVs as a way to get from here to there, then what does it matter what the brand is (much like commercial flying, where passengers never choose flights based on the aircraft making the trip)?
After all, what's the main driver (pun intended) for car purchases today? Beyond purely utilitarian desires (fits a family of 6, or whatever), it's the enjoyment of driving the thing – the performance, the feel of the road beneath your seat, etc. Except for the moron class who insist on texting while driving, and they'd be the first to want an AV anyway. But once the owner is no longer the driver, what will any aspect of the vehicle really matter? We will become a society of passengers, with Uber-like rent-a-rides available to ferry us about at will, and without the hassle of car ownership.
I really fear for the motorcyclists who enjoy riding; if cars become autonomous then will we be allowed to split lanes? Or even ride at all, since foolhardy humans will never match the precision of AVs and might not be permitted on the same roads.