No, the Turing Test is Not Really Bullshit
Jan 29, 2015
A bunch of AI experts at the AAAI convention in Austin are talking about putting together a new-and-improved version of the venerable Turing Test. And boy do we ever need it! I mean, it’s just embarrassing how a stupid chatbot like “Eugene Goostman” can trick people into thinking it’s human. Almost as if the Turing Test were meant to detect human gullibility rather than machine intelligence.
On the other hand, I think the failure of today’s standard Turing test actually proves something very important. This failure reminds us that consciousness isn’t something you can define objectively in a way that everyone will agree on. An AI can never force people to accept it as conscious or morally equivalent to human beings. The AI will have to persuade people gently, with its charming behavior.
Hawking and Musk Aren’t Worried about So-called Weak AI
Jan 15, 2015
This week the big news was about Stephen Hawking and Elon Musk signing an open letter cautioning the world against unbridled AI development. They join many other AI theorists including Nick Bostrom, whom I wrote about earlier. All these famous people are really worried that AI might mean the end of humanity.
Obviously they’re not talking about self-driving cars, voice-recognition, data-mining and stuff like that. They’re not talking about task-oriented applications. It’s pretty clear these guys are worried about a machine with a mind of its own. They’re worried about a machine that can reproduce itself and evolve, and compete with humans for the world’s resources.
That’s just obvious, isn’t it?
That’s not AI – It’s Just Fancy Programming
Jan 6, 2015
Woo-hoo! It’s another AI fad in Silicon Valley! The big bucks are flowing into all kinds of AI start-up projects, according to this article in the Financial Times. There’s an “AI stampede” with a “wild-west mentality.” Investors are putting up millions of dollars because they think this is where the big profits will be made in the near future.
The only trouble is, when you look at these projects they’re investing in, you might just stop and say “Wait a minute – that’s not AI!” What they’re doing is just fancy computer programming. So what’s all the fuss?
Impossible to Believe in both God and AI
Dec 16, 2014
Let’s face it, religion is another reason why we have not yet created true AI machines. Are there any religious people working on artificial general intelligence? Somehow I doubt it. Maybe it’s impossible to believe in both God and AI. After all, an intelligent machine would just be physical stuff – wires and electricity – whereas human beings have an immaterial soul, according to most religious thinking.
What method could a religious person use to build true AI? In addition to computer programming skill, the God-fearing AI developer would also need to use prayer. You could pray for God to endow your carefully prepared robot with an immaterial soul! Would that work?
AI Brains Will be Analog Computers, Of Course
Dec 4, 2014
That same interview with Jaron Lanier had a whole bunch of insightful comments by famous futurology-type people, and I thought one in particular was excellent. George Dyson pointed out something that should be totally obvious if people would only think about it: a true AI machine that really thinks for itself can’t be a digital computer; it must be an analog one. Isn’t that obvious? Our brains are not digital, so why would we expect true AI machine brains to be digital?
The point is that we’re never going to build true AI by writing down instructions for AI behavior. The only way to succeed is by building an AI machine that will really behave, in the real world. Not instructions, but action.