Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


Ethical issues raised by self-driving cars

This is interesting, and far more real than fantasies about our robot overlords (as opposed to our actual capitalist overlords).   What do readers know about these developments?  (Thanks to Phil Gasper for the pointer.)


6 responses to “Ethical issues raised by self-driving cars”

  1. Apparently some have already solved the dilemma.

    http://fortune.com/2016/10/15/mercedes-self-driving-car-ethics/

  2. Also interesting: http://moralmachine.mit.edu/

  3. The unintelligibility of the errors self-driving cars make seems to be a big obstacle to developing the kind of trust you'd need for widespread adoption. We tend to forgive human errors that lead to accidents because we understand things like getting distracted, not seeing something, or falling asleep at the wheel. But when a self-driving car runs at high speed into the broadside of a truck that was in plain sight, without even slowing down, it makes you suspicious of the technology's capacities. You think: at least with those human errors, I can control many of them. How do I control, or even begin to predict, the situations in which my self-driving car is going to screw up?

    Chances are it will be in some of the places I least expect, and that will be a huge hurdle for people to get over before they put their lives in the hands of this new technology. There's a Rahwan quote in there where he blames the issue on our failure to "have mental models of what machines can and cannot do," but I'm not sure that (a) that's the problem, or that (b) it would be a reasonable expectation to have of us.

    Toyota's legal team is also struggling with questions like: should the car be designed to exceed the speed limit when all the other cars on the highway are doing so? It would probably be safer to drive with the flow of traffic, but there's no plausible deniability when you've explicitly designed the thing to break the speed limit. Tricky issues.
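    The tradeoff in that design question can be made concrete with a toy speed-selection rule. This is purely illustrative: the function and its parameters are hypothetical, not anything Toyota (or any manufacturer) has described. The whole policy question collapses into whether the over-limit margin may ever be greater than zero.

    ```python
    def target_speed(speed_limit, traffic_flow_speed, max_overspeed=0.0):
        """Pick a cruise speed given the posted limit and the prevailing
        flow of traffic (all in the same units, e.g. mph).

        max_overspeed is the margin the designer explicitly permits above
        the posted limit. Setting it above zero is exactly the choice
        that removes any plausible deniability.
        """
        # Hard cap: never exceed the limit plus the designed-in margin.
        cap = speed_limit + max_overspeed
        # Prefer matching surrounding traffic, but never exceed the cap.
        return min(traffic_flow_speed, cap)

    # With a zero margin, the car falls behind faster-moving traffic:
    print(target_speed(65, 72))                    # 65
    # With an explicit 5-mph margin, it stays closer to the flow:
    print(target_speed(65, 72, max_overspeed=5))   # 70
    ```

    Either choice is defensible on safety grounds; the legal exposure comes from the fact that the margin is a deliberate, documented design parameter rather than a driver's momentary lapse.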

  4. I've been working in this exact space for several years, including with Stanford and major technology developers, e.g., Google, Apple, Daimler, Tesla, and others. There's a lot more to say than in this (lightweight) article or in the MIT link above (funny how engineers who dabble in ethics are often unaware of the large amount of work already done in ethics; this happens way too much).

    For instance, in the link below, check out my ethics chapter for Daimler, or any number of contributed media articles, e.g., in Wired and The Atlantic. The other link is to a short TED-Ed animated video I wrote, with about 500,000 views; there's growing attention on this subject.

    Stay tuned for more: We'll have another book, "Robot Ethics 2.0", coming out this summer with Oxford University Press that includes several new chapters from other philosophers on ethics and autonomous driving.

    http://ethics.calpoly.edu/robots.htm

    http://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin

  5. Legality: Indemnify the autonomous vehicle industry like nuclear power. Ban all torts. Not only make corporations persons, but give them moral priority over humans, the way some creditors have strong priority over others. The autonomous vehicles will have plenty of legal representation if all are fleet cars, given how expensive they will be, the rapaciousness of Uber in getting rid of its budget un-employees, and the continuing desire to eliminate the Teamsters. Also make gasoline $100 per gallon to get the other cars off the street.

    Practicality: I posed the question of the legal liability of AI at the 1984 Denver convention on AI … and all I got was crickets, not even a stupid t-shirt. Philosophers, more than technophiles, should realize what kind of scam this is. Either the autonomous vehicles are driven en masse by a single program, as slave units, with no regular vehicles on the road (i.e., a tram system), or it won't work at all. At best you could have a tele-operative system (as was the case with the Google car that ran a red light recently): hiring Chinese people to drive our cars, not in person but via satellite from China, when they aren't on shift producing iPhones at Foxconn.
