People are talking all over (in my small world, at least) about the ethics of self-driving cars. Self-driving cars inherently face ethical decisions. Cars must be programmed to make these ethical decisions, trading off different costs including the relative value of people's lives. The MIT Technology Review asks:
How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? (See also “How to Help Self-Driving Cars Make Ethical Decisions.”)
The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?
That's an interesting discussion and I'm glad people are having it. The article highlights some work using experiments to try to understand how people would make these trade-offs.
I'm not an ethicist, nor a computer program, so I don't have much to add to those conversations. But I'm an economist, and I spend time thinking about markets. What interests me about the situation is how markets might arise between different ethical systems. People will be able to choose (and pay for) software that makes different ethical decisions for them. Have we seen that before?
The beauty of a market in ethical systems is the same as the beauty of a market in anything else. Markets won't rely on the research of economists, but on the choices of individuals who actually have to make these trade-offs. The decisions of disparate people are given a voice through the market.
(I doubt that governments will let such markets work for long before mandating certain ethics, but indulge me for a moment that governments allow markets to operate. A later post might deal with how government would mandate a certain ethical system.)
Imagine Larry, an animal lover. He has a real fondness for all animals, especially large ones: deer, moose, rhinos, whatever. He believes it is wrong to harm these animals. Given the choice between a 100% chance of killing an endangered rhino and a 0.1% chance of killing a person, he'd save the endangered rhino. That's the ethical trade-off he chooses for himself.
Now Larry's friend, Sarah, doesn't care for animals as much. She puts more value on saving the person and in this situation would choose to save the person. In a world without markets, what are self-driving cars to do?
That's where the beauty of markets comes in. Markets developed to coordinate marginal trade-offs across people. Markets in ethics (as in groceries) give consumers more options among goods. If Larry and Sarah want different software and are willing to pay the different costs of making and using that software, they can pick out that feature of their car, just as they pick out leather seats.
Now that seems odd. People can choose among ethics at the same level as the type of seat? We don't often make such obviously ethical decisions, but who knows about the future? We already have markets in ethical decisions that would have seemed odd to our ancestors: free-range beef, "fair" trade coffee, environmentally friendly products. People can pay for their different preferences. Self-driving cars seem a little different, because the decision is so explicitly ethical: the owner gives specific weights to the value of lives, animals, and property. But just because the decision is explicit doesn't mean it is fundamentally different from the market for grass-fed beef. Markets would still work to coordinate people's choices.
If people have expensive tastes, such as wanting to avoid every squirrel on the road, they must pay for that through better, more expensive technology. Each individual would be able to make such trade-offs for himself. Markets do that.
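Larry and Sarah's trade-off above can be sketched as a simple expected-cost comparison. This is only an illustration of the idea, not real vehicle software; the outcome names and dollar figures are hypothetical assumptions.

```python
def expected_cost(option, weights):
    """Sum probability * personal cost over an option's possible outcomes."""
    return sum(p * weights[outcome] for outcome, p in option.items())

def choose(options, weights):
    """Pick the option with the lowest expected cost under these weights."""
    return min(options, key=lambda name: expected_cost(options[name], weights))

# "swerve" kills the rhino for certain; "stay" carries a 0.1% chance
# of killing a person (the scenario from the post).
options = {
    "swerve": {"rhino_killed": 1.0},
    "stay":   {"person_killed": 0.001},
}

# Hypothetical personal weights (in dollars of perceived cost).
larry = {"rhino_killed": 5_000, "person_killed": 1_000_000}   # values the rhino highly
sarah = {"rhino_killed": 100,   "person_killed": 10_000_000}  # values the person more

print(choose(options, larry))  # -> stay   (accepts the small risk to the person)
print(choose(options, sarah))  # -> swerve (kills the rhino to protect the person)
```

Same algorithm, different weights, different choices: that is exactly the kind of variation a market in ethical software would let people buy.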
People, whether they realize it or not, will explicitly be buying a utilitarian calculus for their car.
Maybe people will not want to make these decisions and they will simply take whatever Google or Uber gives them. But maybe not. Maybe vegans will form a strong enough group to have a market that values animals highly. A beauty of markets is that no one needs to know in advance. The market will develop spontaneously as people learn more.
Of course, markets do not operate in a vacuum. All markets function in a system of more or less protected property rights. How property rights come to be defined in the self-driving car market will have huge implications for how the market develops. In particular, liability laws will change the marginal trade-offs people face.
Take an extreme example where owners of self-driving cars bear zero responsibility for the damage their cars cause. Under that property-right system, every marginal trade-off the driver faces tilts toward protecting the driver. This is Econ 101: since he pays nothing when he imposes a cost on other people (by hitting them), he will choose software/ethics with a higher chance of hitting people outside his car in order to protect those inside. I doubt many people hope for that property-right structure to develop.
A more realistic example would involve liability similar to what drivers face today: drivers have to compensate injured victims and pay for damaged property. If liability develops in a sensible way, we return to a situation where people have to pay for the costs their car imposes on outsiders.
In the ideal liability world, this might greatly simplify the calculations for the software. The software can simply choose the route that minimizes expected costs, including costs to the driver, the vehicle, and liability.
Where exactly these calculations will go, I have no idea. But I'm excited to see how things develop, if the state allows a market in which people can "vote" for what they value through what they buy.
How do you think a market might develop? Let me know in the comments.