Why Not Go for a Head-Transplant?
Feb 26, 2015
Sounds like a sadistic joke, right? Something that Alice Cooper might sing about. But here’s this Italian neuroscientist who says it’s actually possible. So why not go for a head-transplant? Is it really possible? The main obstacle appears to be connecting the neural fibers in the spinal cord.
But even if you can reconnect severed nerves, I think there’s another major difficulty that people haven’t been talking about very much, and that is: how do you know which nerves to connect to which? You can’t just connect them up in an ad hoc way, because sensory and motor nerves each have specific functions. Each wire in the head must be connected to its specific corresponding wire in the body, or else you’ll get seriously messed up.
The End of the Paperclip Maximizer
Feb 17, 2015
What is a paperclip, anyway? Does it have to be made of metal? Does it have to be any particular size? A super-intelligent machine whose root goal was to maximize paperclips would need to know exactly what paperclips to maximize. If its human programmers failed to provide precise answers, the machine would need to come up with answers of its own.
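To make the under-specification concrete, here’s a minimal toy sketch (entirely my own illustration; the predicates, fields, and thresholds are invented, not anything from Bostrom’s thought experiment). The same “count the paperclips” objective gives different answers depending on which definition of “paperclip” fills the gap the programmers left open:

```python
# Toy sketch of an under-specified objective (illustrative only).
# The objective is only as well-defined as the predicate behind it.

def make_objective(is_paperclip):
    """Build a counting objective from a paperclip-classifying predicate."""
    def objective(world):
        return sum(1 for obj in world if is_paperclip(obj))
    return objective

# What the programmers might have meant: metal, within a sensible size range.
strict = make_objective(
    lambda o: o["material"] == "metal" and 2 <= o["length_cm"] <= 10
)

# A definition the machine might settle on by itself: anything clip-shaped.
loose = make_objective(lambda o: o["shape"] == "clip")

world = [
    {"material": "metal",   "length_cm": 3,   "shape": "clip"},
    {"material": "plastic", "length_cm": 3,   "shape": "clip"},
    {"material": "metal",   "length_cm": 500, "shape": "clip"},
]

print(strict(world))  # 1 -- only the small metal clip counts
print(loose(world))   # 3 -- everything clip-shaped counts
```

Two maximizers sharing the “same” root goal but different predicates would pursue very different worlds, which is the whole point: the precision has to come from somewhere.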
Is the goal to maximize paperclips in the present, or to maximize production of paperclips over the long run? If it’s the latter, then the paperclip maximizer will have an interest in preserving its own existence and its own sustaining environment. We could get an environmentally conscious AI, and that could end up saving the world.
How the Paperclip Maximizer Might Change its Ways
Feb 9, 2015
You know about the “paperclip maximizer” thought experiment? Originally proposed by Nick Bostrom in 2003 and popularized on the “LessWrong” website, the idea is that you could program a super-intelligent machine to just make paperclips, and you might think that’s innocuous enough, but if the machine is really super-intelligent and self-improving, in the end it might “convert most of the matter in the solar system into paperclips.”
The paperclip maximizer illustrates the idea that a machine will never change its root goal. Even if the machine is capable of changing that goal, it chooses not to, because the only thing it wants is to keep pursuing the same unchanged goal. On the other hand, I can think of a few scenarios in which a machine’s root goal could change. Here you go.
No, the Turing Test is Not Really Bullshit
Jan 29, 2015
A bunch of AI experts at the AAAI convention in Austin are talking about putting together a new-and-improved version of the venerable Turing test. And boy do we ever need it! I mean, it’s just embarrassing how a stupid chatbot like “Eugene Goostman” can trick people into thinking it’s human. It’s almost as if the Turing test were meant to detect human gullibility rather than machine intelligence.
On the other hand, I think the failure of today’s standard Turing test actually proves something very important. This failure reminds us that consciousness isn’t something you can define objectively in a way that everyone will agree on. An AI can never force people to accept it as conscious or morally equivalent to human beings. The AI will have to persuade people gently, with its charming behavior.
Hawking and Musk Aren’t Worried about So-called Weak AI
Jan 15, 2015
This week the big news was about Stephen Hawking and Elon Musk signing an open letter cautioning the world against unbridled AI development. They join many other AI theorists including Nick Bostrom, whom I wrote about earlier. All these famous people are really worried that AI might mean the end of humanity.
Obviously they’re not talking about self-driving cars, voice recognition, data mining, and stuff like that. They’re not talking about task-oriented applications. It’s pretty clear these guys are worried about a machine with a mind of its own. They’re worried about a machine that can reproduce itself and evolve, and compete with humans for the world’s resources.
That’s just obvious, isn’t it?