Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


A computer that passed the Turing Test?

Story here.  Any readers know more about this?

(Thanks to Michael Bramley for the pointer.)

UPDATE:  David Chalmers (NYU/ANU) writes:

The chatbot didn't pass the Turing test.  In Turing's original article, he predicted that in fifty years, machines would be able to fool 30% of judges into classifying them as human after 5-minute conversations.  The organizers have somehow bamboozled the media into taking this prediction as the criterion for passing the test.  In Turing's original article it's quite clear that it's nothing of the kind.  In fact, in a follow-up discussion he says that he doesn't think the full test will be passed for at least a century. It's also worth noting that the bar has been lowered considerably by having the chatbot pretend to be a 13-year-old with English as a second language.  If we're allowed to lower the bar like this, one can trivially write a Turing-test-passing program whose responses are indistinguishable from a human who is asleep!

Also, comments are now open.


10 responses to “A computer that passed the Turing Test?”

  1. Eric Schwitzgebel

    [new version with typo corrected]

    Another brilliant takedown here:
    http://www.scottaaronson.com/blog/?p=1858

    This is just some of the most irresponsible science reporting I've ever seen. It's like I replaced the 60 mph sticker on my speedometer with a sticker that says "the speed of light", took my car for a spin up the 101, and the next day's headline read "Speed of Light Exceeded for the First Time".

  2. I haven't seen any chatbots that can do simple natural language inference, e.g. "Is a tall building tall?" Goostman lost a number of times to better chatbots in the Loebner Prize.

  3. One thing I noticed when trying it out is that while it can respond plausibly to some questions, it is much less happy when you make a statement and expect a sensible response. But real conversations do include a mixture of questions and statements, from both parties.

    At one point I got the "And I forgot to ask you where you are from … " response that appears in the Scott Aaronson dialogue to which Eric Schwitzgebel links (comment number 2 above). That, and a couple of other responses in the Scott Aaronson dialogue, look like standard "This is getting difficult, let's try to change the subject" responses. Human beings, and even philosophers, use that tactic too, but not so blatantly, and not when the conversation is still so easy.

  4. Daniel Callcut

    Philosophy 101 question about the original Turing Test: presumably a computer may pass versus some interrogators and not others. So the machine counts as thinking and doesn't count as thinking, relative to the interrogator. Could someone tell me what the standard response is to this concern? What does Turing say, if anything?

  5. Building on D. Chalmers's comment and D. Callcut's comment, I would like to add the following. Some humans probably could not pass some versions of a t-test. And it is likely that some computers can pass a t-test where, as Dave points out, the bar is lowered, or where, as Daniel points out, there is a different interrogator. So: what is the significance of passing some t-test? Perhaps passing some t-test relative to some interrogator is like passing some exam: it just shows some level of intelligence. I think the media hype about passing the t-test is overrated.

  6. Turing doesn't say anything explicit about this, at least in his original paper, but one can avoid the problem of different interrogators reaching different results by specifying that the machine passes the test only if it is identified as human in a certain percentage of all the trials that are carried out. That is how the test is applied in competitions.

  7. In the future, chatbots will be able to pass the Turing test by cheating. I can see obvious weaknesses in the Eugene Goostman software that could be improved: do not ask the same questions or give the same answers over the course of a conversation. Once a sentence has been used, remove it from the list; there is surely a multitude of evasive answers and questions to choose from. If the judge asks the same question multiple times, provide an answer that shows awareness of the repetition. The chatbot could even learn to handle a multitude of natural language inference questions, such as in the example "Is a tall building tall?", based on the syntax of the sentence ("tall building", "orange carrot"), by determining that the first word is a property of the second. However, I don't think this approach will create intelligence; it will just mimic intelligence, though it may still be a useful feature.
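    The bookkeeping tactics this comment describes can be sketched in a few lines of Python. Everything here is illustrative, not the actual Goostman implementation: the canned evasive replies are invented, and a real chatbot would draw on a far larger pool.

    ```python
    import random

    class EvasiveBot:
        """Toy sketch of the repeat-avoidance tactics described above.

        The evasive replies below are invented for illustration; the
        point is only the bookkeeping: never reuse a reply, and notice
        when the judge repeats a question.
        """

        def __init__(self):
            # Evasive replies, each consumed at most once so the bot
            # never gives the same answer twice in a conversation.
            self.evasions = [
                "And I forgot to ask you where you are from...",
                "Let's talk about something else, shall we?",
                "Why do you ask?",
            ]
            self.seen_questions = set()

        def reply(self, question):
            normalized = question.strip().lower()
            # Show awareness when the judge repeats a question verbatim.
            if normalized in self.seen_questions:
                return "You already asked me that!"
            self.seen_questions.add(normalized)
            # Once a sentence has been used, remove it from the list.
            if self.evasions:
                return self.evasions.pop(random.randrange(len(self.evasions)))
            return "Hmm, I am not sure what to say."

    bot = EvasiveBot()
    print(bot.reply("Is a tall building tall?"))  # some evasive reply
    print(bot.reply("Is a tall building tall?"))  # repetition is detected
    ```

    As the comment concedes, none of this amounts to understanding the question; it only papers over the most obvious tells an interrogator would exploit.
    
    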

  8. Clearly, the computer is more intelligent than the interrogators that believe they are talking to a human…

  9. The "researchers" at Reading who conducted this test are headline-grabbing charlatans.
