Real AI Might Not Even Pass the Turing Test Anyway
June 11, 2014
Lots of news sites are saying this chatbot passed the Turing test (here, here, here), calling it totally unprecedented, but other sites (here, here, here) say it’s all bullshit. Hmm, it does indeed look like some kind of journalistic scam. On the other hand, who even cares about the stupid Turing test anyway? It’s obviously just a contest for chatbots, and it has little to do with real AI.
See, the point of the Turing test is for an AI to pretend to be human, to emulate human intelligence, but real AI is supposed to surpass human intelligence. Real AI won’t imitate us; it will grow beyond our human limitations into something entirely new.
Just consider: If you told a real AI to pretend to be human, there’s a good chance the AI would feel offended. Imagine how you would feel if someone told you in all seriousness to act like a cockroach. That’s how things might look from the AI’s point of view.
Also, the AI might refuse to engage in the kind of dishonesty that is the very core of the Turing test. See, if you were chatting with the AI, you could just ask whether it was human, and the AI would probably admit it was not. Bingo! The AI just might not feel like lying to you.
The Turing test is one long string of lies all the way through. Hey, another simple way to catch an AI is to ask it a quick math question, like “What’s the square root of 3,471?” A human being would say, “How should I know?” or else get out pencil and paper and take several minutes to come up with the answer. A real AI, on the other hand, would answer immediately: “It’s approximately 58.9152.”
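The “quick math” tell above is easy to check for yourself. A minimal sketch in Python, which answers as instantly as any machine would:

```python
import math

# A human needs pencil and paper; a machine answers at once.
answer = round(math.sqrt(3471), 4)
print(answer)  # 58.9152
```

Which is exactly why a chatbot trying to pass as human has to fake being slow and bad at arithmetic, one more layer of deception on top of the rest.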
So a real artificial general intelligence machine might not even pass the Turing test. And that would be perfectly OK. It wouldn’t mean the AI wasn’t a real AI. It wouldn’t mean the machine wasn’t fully intelligent, conscious, and alive. It would just mean that we’d need a new kind of test! We would need a new understanding of what intelligence even is in the first place.