No, the Turing Test is Not Really Bullshit
January 29, 2015
A bunch of experts at the AAAI conference in Austin are talking about putting together a new-and-improved version of the venerable Turing Test, as reported by io9. And boy do we ever need it! I mean, it’s just embarrassing how a stupid chatbot like “Eugene Goostman” can trick people into thinking it’s human. Almost as if the Turing test were meant to detect human gullibility rather than machine intelligence.
AI researcher Gary F. Marcus of New York University said, “People within the field have realized the original test proves little.” Oh! Now you tell us. The great genius Alan Turing was just wrong then, huh?
Well, I think the failure of today’s standard Turing test actually proves something very important. There’s a lesson in this failure.
First, let’s recall why Turing proposed his test in the first place. He couldn’t come up with a clear and detailed definition of consciousness that everyone would agree on. So instead of laying down a prescriptive definition, Turing figured we just “know it when we see it.” In other words, we’ll say that behavior indicates intelligence, or consciousness – as long as that behavior is convincing enough to enough people.
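For concreteness, the imitation game Turing described can be sketched as a tiny protocol: a judge questions two hidden respondents and guesses which one is human. To be clear, this is a hypothetical illustration, not anything from Turing’s paper beyond the basic shape – the `ask`/`guess`/`answer` interfaces are names I’m inventing here, and the 30% pass threshold only loosely echoes Turing’s famous prediction that an average interrogator would misidentify the machine about 30% of the time.

```python
import random

def imitation_game(judge, human, machine, num_questions=5, seed=None):
    """One round of the imitation game: a judge questions two hidden
    respondents, then guesses which label belongs to the human.
    Returns True if the judge mistakes the machine for the human."""
    rng = random.Random(seed)
    # Hide the respondents behind the labels "A" and "B" at random.
    if rng.random() < 0.5:
        assignment = {"A": human, "B": machine}
    else:
        assignment = {"A": machine, "B": human}
    transcripts = {"A": [], "B": []}
    for i in range(num_questions):
        question = judge.ask(i)
        for label, respondent in assignment.items():
            transcripts[label].append((question, respondent.answer(question)))
    guess = judge.guess(transcripts)  # the label the judge thinks is human
    machine_label = "A" if assignment["A"] is machine else "B"
    return guess == machine_label

def passes_turing_test(judges, human, machine, threshold=0.3):
    """The machine "passes" if it fools at least `threshold` of the
    judges; the pass criterion is statistical persuasion, not proof."""
    fooled = sum(imitation_game(j, human, machine) for j in judges)
    return fooled / len(judges) >= threshold
```

Notice that nothing in this sketch measures intelligence directly. The only observable is whether judges are persuaded – which is exactly the point of the paragraph above.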
I think Turing made this decision on purpose. It’s not that he tried and failed to define consciousness objectively. It’s that he realized you can’t define it objectively. Or at least you can’t force all people to accept your authoritative definition. Turing’s insight – and the enduring value of his eponymous test – is that you can’t force people to believe a thing is conscious or intelligent. Instead, you must persuade people. And the way to persuade people that you are conscious is by your behavior.
There’s an inherent solipsism going on here, and when it comes to machine consciousness, we’re always going to have a lot of human solipsists. In other words, even when we have highly sophisticated AI in a realistically humanoid body, we will always have people who deny that the AI is “really” conscious.
There’s nothing we can do about that! We can’t force people to accept that our super-intelligent AI machine is truly conscious. We can’t make people get chummy with Star Trek’s Commander Data. We can’t make people grow into a father-son relationship with the Arnold Schwarzenegger robot from “Terminator 2” or any of the other highly advanced AIs we’ve dreamed up in fiction. Our future AI creation will just have to win people over with its charming personality.
Don’t be racist against AI!
Remember how long it took for people to get over racial prejudice? Oh yeah – we still haven’t overcome that. There’s no way you can force somebody to accept their differently colored neighbor as fully human and morally equivalent to themselves. It’s a matter of getting used to each other. You have to live with those “different” people and interact with them for a while. Traditional attitudes die hard. You’re always going to have a few closet racists, and in the same way you’ll always have people who deny that super-intelligent AI machines are really conscious and morally equal to human beings.
The failure of today’s Turing test alerts us to expect “racial” prejudice against future AI machines. You heard it here first, folks: Don’t be racist against AI!