Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.

A computer scientist wonders: where are the philosophers discussing AI?

He's aware of many of them, but still raises some interesting questions. Can we design artificial intelligence capable of what Dopey Donald Chump pulled off in America?

Nick Bostrom's stuff strikes me as a bit silly, but perhaps I've missed something: couldn't we just pull the plug, literally? Thoughts from those who are well-informed?

8 responses to “A computer scientist wonders: where are the philosophers discussing AI?”

  1. david chalmers

    the issues are serious. bostrom's work is important, and a number of other philosophers have been writing on these issues in recent years (my own take is at http://consc.net/papers/singularity.pdf ). we're holding a major conference on "the ethics of artificial intelligence" at NYU on october 14-15 this year (website coming soon), with a number of leading philosophers and AI researchers.

    BL COMMENT: Could you or someone else explain why they are serious?

  2. david chalmers

    see p. 4 of the article linked above.

  3. Your discussion of the obstacles to the possible "singularity" is illuminating, and usefully articulates why one might not think this is a very serious concern. In any case, I encourage those interested to read Chalmers's discussion.

  4. Pulling the plug is the obvious thing to do, but what if systems have become so integrated that pulling the plug on the one bit causing trouble also messes up the systems that supply drinking water, run the sewage, get food to supermarkets, supply electricity to homes, hospitals and factories, or perform some equally important function?

    At the (I hope) fantasy end of the spectrum is the system that deliberately gets integrated in that kind of way, in order to deter human beings from pulling any plugs, for their own good. But even leaving that aside, integration could mean that there would be side-effects to pulling plugs – compare how easy it is to mess up an ecosystem by destroying what looks like a pest.

    Keeping things separate and tightly controlling their communications with one another may be a recipe for inefficiency: the prospective clever systems may see ways to make things more efficient by integration which have escaped the notice of human programmers. But separation may be a useful precaution. Me, I'm not letting my toaster join the internet of things, nor linking a payment card to a cellphone to use Android Pay or anything similar.

  5. There is this seminar at Dagstuhl next week: http://www.dagstuhl.de/program/calendar/partlist/?semnr=16222&SUOG.

    And there was this workshop in Utrecht last month: https://www.projects.science.uu.nl/reins/?tribe_events=responsible-intelligent-systems-in-perspective-where-computer-science-philosophy-and-legal-theory-meet.

    Both featuring some but probably not enough philosophers.

  6. Bostrom's concern isn't with plugs, but with corks, specifically the risk of being talked into pulling one (in the 1001 Nights sense of being tricked into letting the demon out of the bottle).

  7. How do you program motivated spontaneity? Even berserkers fueled by amanita muscaria required that.

  8. Here's Bostrom's answer to the 'couldn't we just pull the plug?' question on the OUP blog (http://blog.oup.com/2014/09/interview-nick-bostrom-superintelligence/):

    "It is worth noting that even systems that have no independent will and no ability to plan can be hard for us to switch off. Where is the off-switch to the entire Internet?

    "A free-roaming superintelligent agent would presumably be able to anticipate that humans might attempt to switch it off and, if it didn’t want that to happen, take precautions to guard against that eventuality. By contrast to the plans that are made by AIs in Hollywood movies – which plans are actually thought up by humans and designed to maximize plot satisfaction – the plans created by a real superintelligence would very likely work. If the other Great Apes start to feel that we are encroaching on their territory, couldn’t they just bash our skulls in? Would they stand a much better chance if every human had a little off-switch at the back of our necks?"

    We're the ones who are developing AI, so whether those systems are safe ultimately depends on how we design them. Bostrom's argument isn't that this is hopeless, just that it's more difficult than it intuitively sounds.

    Intuitively, 'build in a shut-down button' or 'keep the computer air-gapped' sounds simple. The history of computer security, however, shows that intelligent adversaries can very often find clever loopholes around safety measures, especially with new and relatively untested systems. Even when attackers are merely human-level, "a dollar of offense beats a dollar of defense" pretty routinely (https://threatpost.com/defenders-need-to-embrace-offensive-security-skillsets/117255/). A superintelligent attacker could exhibit vastly more ingenuity, and any interaction we have with the system is a potential failure point. (We can also expect there to be strong economic incentives to cut corners on safety, especially if a race dynamic develops.)

    Bostrom's proposal is that we find ways to avoid adversarial scenarios altogether, by designing AI systems to learn correct goals/values. If the system isn't actively trying to subvert your safety measures, then you can be much more optimistic that measures designed at or below the human level will be robust to big capability gains. This is also a pretty intuitive proposal, but not many research hours have gone into it as yet; there are some massive unanswered questions here, both philosophical and technical, that don't arise in other learning problems. (http://intelligence.org/files/ValueLearningProblem.pdf lists a handful.)
