Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.

Ethics of self-driving cars

I get worried when philosophers get involved in stuff like this.  No one will buy a self-driving car that will sacrifice the driver to save others.  Full stop.  If the law mandates that self-driving cars do so, people will not use them.  Self-driving cars will simply have to mimic the self-preservation instincts of drivers.  Perhaps there are limits to those, but I think we probably need empirical and psychological information about behaviors, not philosophical reflections.

Am I wrong?  If so, how?

(Thanks to Robert McGarvey for the pointer.)

9 responses to “Ethics of self-driving cars”

  1. I am teaching a class on ethical theory, with a focus on autonomous vehicles, this semester and next semester. People already invest different amounts of money in their own safety, according to what kind of car or SUV they drive and what safety features they have. I imagine that insurance policies would be priced according to the programming of the vehicle. The vehicles programmed to protect occupants at all costs would pay the highest rates. The vehicles programmed to make utilitarian decisions that minimized damage overall would pay the lowest rates.

  2. My understanding is that most people who think seriously about self-driving cars don't think they will be widely _individually_ owned at all, but rather that they will work as "fleets" that people will buy subscriptions to – you will have the ability to summon one to do your bidding, but you won't have it or use it all the time in the way that people use cars now. (This is supposed to be part of their advantage.) I have no idea if that's likely or not, but if so, then it seems massively more likely to me that algorithms that work in fairly strictly "utilitarian" ways will just be put in as part of the regulatory framework, and that people will mostly either not know, or not notice, or just ignore it. This may not be so for the first self-driving cars that are privately owned, and which help develop the technology. But I'd be surprised if, as these become common, the same sort of cost-benefit structure that governs (largely rightly) risk in most of our lives doesn't control the regulatory infrastructure for subscription-based services, if those in fact develop.

  3. People have vaccinations that put them at (a tiny) risk, as long as enough other people have the same vaccinations that, overall, they are safer. And lots of people (outside the US) would vote for laws making such vaccinations mandatory. There's a big transition problem, for sure. But doesn't this problem have the same structure? (Just as you wonder if you're missing something, I wonder if I'm missing something).

    On the philosophical reflections: frankly, at some level I don't think the philosophical detail needed to think about these issues would strike many philosophers as complex or difficult. That doesn't mean that engineers and managers can come up with it by themselves. I'm often struck, when I work with very smart educational leaders and people on the business side of the university, by how valuable they seem to find distinctions that seem, to philosophers, fairly mundane.

  4. Here's empirical evidence that basically just bears out what you've said: http://science.sciencemag.org/content/352/6293/1573

    Additionally, Walter Sinnott-Armstrong is involved in some empirical work that also supports the idea that people subscribe to "Buy Autonomous Vehicles as I Say, Not as I Do." They report preferences for self-sacrificing vehicles but would never buy one themselves. Shocking, truly.

  5. Just highlighting Matt's point above: the obvious goal is to have a system of all autonomous cars, maximally energy-efficient and interconnected. At that point, there will be very few crashes and casualties, and people will accept a (mostly) utilitarian setup. The difficult questions are what to do in the transition to that goal, while humans are still in control of some cars. I think we could convince people to purchase autonomous cars whose programs (with details otherwise mostly hidden, but available to legislators and insurance companies) are designed to protect the passengers from human drivers and the errors they will make (so non-utilitarian in those cases), but also to protect (within reason) innocent people (e.g., pedestrians) even if it puts the passengers at risk (so, more utilitarian about those decisions). We already instinctively put ourselves at great risk to avoid hitting people (or deer or squirrels), so maybe we can frame it as the cars being programmed to do some of that, but not as stupidly as we do it instinctively…

  6. In a situation where someone must get hurt, autonomous vehicles should have a preference for those persons/vehicles who are behaving most safely/following the law, followed by one for self-preservation, and followed by one for utilitarianism.

    It strikes me that this order better allocates economic liability than a pure utilitarian concept. The greatest beneficiary of the utilitarian mandate would seem to be the person driving, by hand and quite recklessly, an unsafe car full of people. That seems exactly like the behavior we should not reward. There is a class issue–I would imagine the person driving that car might not be able to afford a safe autonomous model–but that would seem better addressed by targeted subsidies rather than the regulatory structure of autonomous vehicles.

    We'd even have a misallocation of costs in legal controversies–if your relative dies when her Google car crashes, Google will have the option to make a defense based on why the car's seemingly irrational behavior in fact best fulfilled the utilitarian legal mandate. How readily are you actually going to be able to disprove Google's view? You'd have to hire an expensive expert and hope for the best. After all, I doubt we'll have a utilitarian mandate so well-defined that the answer would ever be clear.

  7. 95% of accidents (which kill about 30,000 people per year in the US, many more in low-income countries) are due to driver error. Let the robot cars be selfish. It will eliminate almost all fatalities on the road. As you say, people will not get in the cars if they think they will be dumped. The most important thing is that we get human hands off the wheels, not that we teach the robots to avoid the school kids and drive us off the bridge.

  8. You'd be surprised what people will do and/or can be enticed into doing by clever and creative designers.

    Tons of empirical research shows that people will get into and feel comfortable in a self-driving car as long as they are made to feel as though the car cares about them and is to some degree responsive to them.

    People definitely do want to have some feeling of control. The first thing many people do in a self-driving car is issue it various commands, to adjust this or that. They seem to want to establish that they are to some degree in control of the car. The dirty little secret is that people can be given that "in control" feeling without actually having much control at all.

    And as others said above, the engineers who are working on these things are working on simply reducing the number of morally problematic situations to as close to zero as they possibly can. Perhaps they will never eliminate all such situations, but they will come pretty close to making this a non-issue. At least that's what they believe.

    Right now, on a straight highway with no pedestrians and no mountains, filled with self-driving cars alone, you can already get accidents down to almost zero. This suggests to some that a transitional stage should be one where there are self-driving-car lanes on the highways in which no human drivers are allowed.

    The problem for self-driving cars at present is cityscapes, where pedestrians lurk around every corner. Snow and mountainous roads are also a problem. That has to be ironed out before trucking is given over to the computers. But the day is coming.

    You can bet that the day is coming when self-driving cars will eventually take over completely from humans. Just as technology has made airlines much, much safer than they were originally, so too with cars. The carnage caused by humans behind wheels is far, far too great to keep them there. And you probably won't own your own self-driving car. Cars mostly sit. No need to own a car that mostly sits when you can get a self-driving car to pick you up and take you where you want to go for a pittance.

    And again, re people not getting into them willingly … designers aren't dumb. They know what makes humans click. They know how to get around initial human reticence. And they are working like crazy on that problem, just down the street from me, actually.

    Here's a link to a video of a live episode of Philosophy Talk on the topic.

    https://www.philosophytalk.org/driverless-cars-moral-crossroads-live#

  9. Wesley Buckwalter

    Thanks for posting this. My understanding is that from an engineering standpoint this is mostly a non-issue (i.e., the answer in high-speed crashes is "apply brakes" and is basically never "swerve"). So while it might be interesting (to some) to recast trolley problems with cars, there are far more pressing, realistic ethical questions about these cars and the social impact of the industry, by comparison.
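
    The lexicographic preference ordering proposed in response 6 above (protect those behaving safely first, then self-preservation, then utilitarian harm-minimization) can be sketched as a small decision rule. This is a hypothetical illustration only: the `Outcome` fields and the `choose` function are invented for the sketch and correspond to no actual vehicle software.

    ```python
    # Hypothetical sketch of the lexicographic ordering from response 6:
    # 1) prefer actions that do not harm parties who are behaving safely/lawfully,
    # 2) then prefer actions that do not harm the vehicle's own occupants,
    # 3) then minimize total harm (the utilitarian tie-breaker).
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        harms_compliant_party: bool  # would this action hurt someone following the rules?
        harms_occupants: bool        # would this action hurt the car's own passengers?
        total_harm: int              # rough count of people injured

    def choose(outcomes):
        # Python compares tuples lexicographically, and False < True,
        # so min() applies the three criteria in strict priority order.
        return min(outcomes, key=lambda o: (o.harms_compliant_party,
                                            o.harms_occupants,
                                            o.total_harm))

    # Example: braking injures a jaywalker; swerving injures a law-abiding pedestrian.
    brake = Outcome(harms_compliant_party=False, harms_occupants=False, total_harm=1)
    swerve = Outcome(harms_compliant_party=True, harms_occupants=False, total_harm=1)
    assert choose([brake, swerve]) is brake  # the rule prefers braking
    ```

    Note how, unlike a purely utilitarian rule, this ordering never trades a rule-follower's safety for a smaller total harm count, which is exactly the liability-allocation point made in that response.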
