Chatbots are the Lowest of the Low – PZ Myers #2
Sep 18, 2014
Look, PZ Myers wrote about artificial intelligence again. That’s twice in two weeks, and again I can’t really disagree with his assessment because he says “Chatbots are boring. They aren’t AI.” He says there is “no inner dialog in the machine, no ‘thinking’, just regurgitations of scripted output in response to the provocation of language input.” And that’s obviously true.
But apparently there are people who think a real AI can somehow emerge, for no particular reason, from a chatbot. PZ Myers mentions David Hanson, who thinks real AI might spontaneously “wake up” or “catch on fire” somehow. Hanson does say the AI might “start to evolve spontaneously and unpredictably,” so he’s at least thinking about the concept of evolution. But how can something evolve if there’s no population, environment or selective pressure? You can’t just sit there hoping for a miracle. You’ve got to give us some kind of practical explanation.
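The point can be made concrete with a toy genetic algorithm, where all three required ingredients are explicit: a population of candidates, an environment in the form of a fitness function, and selective pressure that lets fitter candidates reproduce. This is purely my own illustrative sketch (the names and parameters are invented, not from any of the articles discussed here):

```python
import random

# Toy genetic algorithm: nothing "catches fire" spontaneously --
# evolution happens only because a population is repeatedly
# evaluated by an environment and filtered by selection.

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    # The "environment": genomes with more 1-bits are better adapted.
    return sum(genome)

def mutate(genome):
    # Random variation: each bit flips with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    random.seed(42)  # fixed seed so the run is repeatable
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selective pressure: only the fitter half reproduces.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring
    return max(fitness(g) for g in population)

print(evolve())
```

Delete any one of the three ingredients (replace the population with a single individual, make fitness constant, or keep everyone regardless of fitness) and the loop stops going anywhere, which is exactly the objection to hoping a lone chatbot will "evolve spontaneously."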
AI Won’t Have Human-like Hygiene Problems – PZ Myers
Sep 10, 2014
How similar do you think machine intelligence will be to human intelligence? If machines don’t care about food, shelter and clothing, and if they don’t care about sex or marriage, and if they don’t have to deal with old age or death – what part of human existence will the AI machines share? What will we even have in common with AI at all?
The answer is that AI machines will be concerned about their long-term survival, or about transmitting their genes (memes) into the future. The AI will have this concern for the same reason we do – because they will have evolved by natural selection, just as we did with our human intelligence. Thus, AI machines will have to be concerned about their evolutionary fitness. This is perhaps the only sense in which AI will indeed have something in common with human intelligence.
Flash Fiction: Two Brainstorms about Trees
Sep 2, 2014
Sometimes I think the project of building a strong AI machine is like building a tree. Doesn’t that sound strange? We usually talk of growing a tree, but why don’t we build trees instead? Well, the same question applies to artificial intelligence.
In theory it’s possible to build a tree molecule-by-molecule in a very advanced 3-D printer, and that tree would be alive like any other tree. It would grow and eventually produce fertile seeds. It would carry out its metabolism in a seasonal cycle and heal itself when damaged or diseased.
On the other hand, if you want a tree, it’s much easier just to grow one the natural way from a seed. And the same is probably true for a strong AI machine.
People Actually Prefer Taking Orders from Robots
Aug 26, 2014
Here’s something new – people actually prefer being led by AI, according to an MIT study. Who would have guessed? After all those Hollywood horror stories and all the scoffing at mindless unemotional robots, it turns out we actually like artificial intelligence.
The study involved complex assembly on a factory floor. When the human workers were instructed by a robot, they “reported feeling at their most efficient and effective.” It’s interesting that the human workers weren’t necessarily more efficient, but they did feel more efficient. That’s the cool thing, because we’ve always known machines were efficient. Only now are they saying that we like machines, too.
Well, what’s the explanation? The article was disappointingly silent on why people might be happier with an AI boss. So let’s brainstorm a bit. Here are three possible explanations, and for me the third seems particularly interesting.
Six Ways to Recognize a Crackpot AI Theorist
Aug 19, 2014
Why haven’t we achieved general artificial intelligence yet? One reason is that the field of AI is full of crackpots! Too many weirdos with their crazy superstitious ideas and get-rich-quick schemes.
And they’ve scared off the smart people. At least that’s what Nick Bostrom seems to think. In an interview he said, “A lot of academics were wary of entering a field where there were a lot of crackpots or crazies. The crackpot factor deterred a lot of people for a long time.”
I suppose all honest, self-respecting AI theorists need to think about whether they themselves might be crackpots. So here goes. I won’t try to tell you I’m not a crackpot, but I’ll just give you a few suggestions for how you can decide for yourself. Here’s a quick list of six ways to recognize a crackpot.