Would We be Cruel to Create a Real AI?
Sep 25, 2014
A while ago George Dvorsky was asking whether it would be “evil” to build a functional brain inside a computer. If we built a true AI entity, in other words, and if it were fully conscious like a human being, wouldn’t that AI entity suffer great emotional distress?
Dvorsky mentions several reasons why an AI might suffer, but I think he’s assuming too much human control and responsibility. He’s also taking a far too narrow view of what AI existence will really be like, or what AI entities will be capable of.
Certainly the AI beings will feel pain and experience emotional suffering, but it won't be our fault. At most, we will be responsible for giving them life. So if the AI beings love life, they will be grateful to us, but if they find life too painful and strenuous, then they will have us to blame.
Chatbots are the Lowest of the Low – PZ Myers #2
Sep 18, 2014
Look, PZ Myers wrote about artificial intelligence again. That’s twice in two weeks, and again I can’t really disagree with his assessment because he says “Chatbots are boring. They aren’t AI.” He says there is “no inner dialog in the machine, no ‘thinking’, just regurgitations of scripted output in response to the provocation of language input.” And that’s obviously true.
But apparently there are people who think a real AI can somehow emerge, for no particular reason, from a chatbot. PZ Myers mentions David Hanson, who thinks real AI might spontaneously “wake up” or “catch on fire” somehow. Hanson does say the AI might “start to evolve spontaneously and unpredictably,” so he’s at least thinking about the concept of evolution. But how can something evolve if there’s no population, environment or selective pressure? You can’t just sit there hoping for a miracle. You’ve got to give us some kind of practical explanation.
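To make the point concrete, here is a toy sketch (the names and parameters are my own illustration, not anything Hanson or Myers proposed) of what evolution minimally requires: a population of variants, an environment that scores them, and selective pressure that lets the fitter ones reproduce. A chatbot has none of these.

```python
import random

random.seed(0)

def fitness(genome):
    # The "environment": genomes with more 1-bits survive better.
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=50, mutation_rate=0.05):
    # A population: many competing variants, not one lone program.
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selective pressure: only the fitter half gets to reproduce.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Heredity with variation: offspring are mutated copies of survivors.
        offspring = [[bit ^ (random.random() < mutation_rate) for bit in p]
                     for p in parents]
        population = parents + offspring
    return max(fitness(g) for g in population)
```

Remove the sorting-and-truncation step and the population just drifts at random; nothing improves, let alone "catches on fire." That is the practical gap in the spontaneous-emergence story.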
AI Won’t Have Human-like Hygiene Problems – PZ Myers
Sep 10, 2014
How similar do you think machine intelligence will be to human intelligence? If machines don’t care about food, shelter and clothing, and if they don’t care about sex or marriage, and if they don’t have to deal with old age or death – what part of human existence will the AI machines share? What will we even have in common with AI at all?
The answer is that AI machines will be concerned about their long-term survival, or about transmitting their genes (memes) into the future. The AI will have this concern for the same reason we do – because they will have evolved by natural selection, just as we did with our human intelligence. Thus, AI machines will have to be concerned about their evolutionary fitness. This is perhaps the only sense in which the AI will indeed have something in common with human intelligence.
Flash Fiction: Two Brainstorms about Trees
Sep 2, 2014
Sometimes I think the project of building a strong AI machine is like building a tree. Doesn’t that sound strange? We usually talk of growing a tree, but why don’t we build trees instead? Well, the same question applies to artificial intelligence.
In theory it’s possible to build a tree molecule-by-molecule in a very advanced 3-D printer, and that tree would be alive like any other tree. It would grow and eventually produce fertile seeds. It would carry out its metabolism in a seasonal cycle and heal itself when damaged or diseased.
On the other hand, if you want a tree, it’s much easier just to grow one the natural way from a seed. And the same is probably true for a true AI machine.
People Actually Prefer Taking Orders from Robots
Aug 26, 2014
Here’s something new – people actually prefer being led by AI, according to an MIT study. Who would have guessed? After all those Hollywood horror stories and all the scoffing at mindless unemotional robots, it turns out we actually like artificial intelligence.
The study involved complex assembly on a factory floor. When the human workers were instructed by a robot, they "reported feeling at their most efficient and effective." It's interesting that the human workers weren't necessarily more efficient, but they did feel more efficient. That's the cool thing, because we've always known machines were efficient. Only now do we learn that we actually like taking direction from them, too.
Well, what’s the explanation? The article was disappointingly silent on why people might be happier with an AI boss. So let’s brainstorm a bit. Here are three possible explanations, and for me the third seems particularly interesting.