
"Psychological roadblocks" for self-driving cars


3 Questions: The trust gap between people and autonomous vehicles. MIT News talks with Iyad Rahwan, who heads the Scalable Cooperation research group.

By Peter Dizikes | MIT News Office

This summer, a survey released by the American Automobile Association showed that 78 percent of Americans feared riding in a self-driving car, with just 19 percent trusting the technology. What might it take to alter public opinion on the issue? Iyad Rahwan, the AT&T Career Development Professor in the MIT Media Lab, has studied the question at length and, along with Jean-François Bonnefon of the Toulouse School of Economics and Azim Shariff of the University of California at Irvine, has authored a new commentary on the subject, titled “Psychological roadblocks to the adoption of self-driving vehicles,” published today in Nature Human Behaviour. Rahwan spoke to MIT News about the hurdles automakers face if they want greater public buy-in for autonomous vehicles.

Q: Your new paper states that when it comes to autonomous vehicles, trust “will determine how widely they are adopted by consumers, and how tolerated they are by everyone else.” Why is this?

A: It’s a new kind of agent in the world. We’ve always built tools and had to trust that technology would function the way it was intended. We’ve had to trust that the materials are reliable and pose no health hazards, and that consumer protection entities promote the interests of consumers. But these are passive products that we choose to use. For the first time in history, we are building objects that are proactive, autonomous, and even adaptive. They are learning behaviors that may be different from the ones they were originally programmed for. We don’t really know how to get people to trust such entities, because humans don’t have mental models of what these entities are, what they’re capable of, or how they learn.

Before we can trust machines like autonomous vehicles, we face a number of challenges. The first is technical: the challenge of building an AI [artificial intelligence] system that can drive a car. The second is legal and regulatory: Who is liable for different kinds of faults? A third class of challenges is psychological. Unless people are comfortable putting their lives in the hands of AI, none of this will matter. People won’t buy the product, the economics won’t work, and that’s the end of the story. What we’re trying to highlight in this paper is that these psychological challenges have to be taken seriously, even if [people] are irrational in the way they assess risk, even if the technology is safe and the legal framework is reliable.

Q: What are the specific psychological issues people have with autonomous vehicles?

A: We identify three psychological challenges that we think are fairly big. One of them is ethical dilemmas: A lot of people are concerned about how autonomous vehicles will resolve them. How will they decide, for example, whether to prioritize safety for the passenger or safety for pedestrians? Should this influence the way in which the car makes a decision about relative risk? And what we’re finding is that people have an idea about how to solve this dilemma: The car should just minimize harm. But the problem is that people are not willing to buy such cars, because they want to buy cars that will always prioritize their own safety.

A second one is that people don’t always reason about risk in an unbiased way. People may overestimate the risk of dying in a crash caused by an autonomous vehicle even if autonomous vehicles are, on average, safer. We’ve seen this kind of overreaction in other fields. Many people are afraid of flying even though they’re far less likely to die in a plane crash than in a car crash. So people don’t always assess risk rationally.

The third class of psychological challenges is that we don’t always have transparency about what the car is thinking and why it’s doing what it’s doing. The carmaker has better knowledge of what the car thinks and how it behaves … which makes it more difficult for people to predict the behavior of autonomous vehicles, which can also diminish trust. One of the preconditions of trust is predictability: If I can trust that you will behave in a particular way, I can behave according to that expectation.

Q: In the paper you state that autonomous vehicles are better depicted “as being perfected, not as perfect.” In essence, is that your advice to the auto industry?

A: Yes. I think setting very high expectations can be a recipe for disaster, because if you overpromise and underdeliver, you get in trouble. That is not to say that we should underpromise; we should just be a bit realistic about what we promise. If the promise is an improvement on the status quo, that is, a reduction in risk to everyone, both pedestrians and passengers, that’s an admirable goal. Even if we achieve it in a small way, that’s already progress that we should take seriously. I think being transparent about that goal, and about the progress being made toward it, is crucial.

