Impossible to Believe in Both God and AI
Dec 16, 2014
Let’s face it, religion is another reason why we have not yet created true AI machines. Are there any religious people working on artificial general intelligence? Somehow I doubt it. Maybe it’s impossible to believe in both God and AI. After all, an intelligent machine would just be physical stuff – wires and electricity – whereas human beings have an immaterial soul, according to most religious thinking.
What method could a religious person use to build true AI? In addition to computer programming skill, the God-fearing AI developer would also need to use prayer. You could pray for God to endow your carefully prepared robot with an immaterial soul! Would that work?
AI Brains Will be Analog Computers, Of Course
Dec 4, 2014
That same interview with Jaron Lanier had a whole bunch of insightful comments by famous futurology-type people, and I thought one in particular was excellent. George Dyson pointed out something that should be totally obvious if people would only think about it: a true AI machine that really thinks for itself can't be a digital computer; it must be an analog computer. Isn't that obvious? Our brains are not digital, so why would we expect true AI machine brains to be digital?
The point is that we're never going to build true AI by writing down instructions for AI behavior. The only way to succeed is by building an AI machine that really behaves in the real world. Not instructions, but action.
Jaron Lanier Says AI Philosophy is More Dangerous than AI
Nov 26, 2014
Jaron Lanier was interviewed on The Edge and said various outrageous things. He said, “If AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing.” Does anyone really buy that? I mean, we’ve heard all sorts of comforting explanations for why we don’t need to worry about AI, but this is a new one.
Wait – what’s so dangerous about AI philosophy? Just that it bothers engineers, according to Lanier. Talking about true AI distracts the serious coders from working on their latest image-recognition software, or something.
So this is great – why don’t we totally divorce the two kinds of “AI” and develop them in totally different directions? I certainly don’t want to distract any engineers working on lucrative weak AI programs.
Comparing AI to ET: Evolution is the Common Denominator
Oct 24, 2014
There was an interesting talk by SETI enthusiast Morris Jones, who pointed out that we would need to learn about alien biology before we could understand how the aliens think. This is equally true of AI machines! The AI won't be biological, so let's say "evolved" instead.
It’s kind of cool to compare AI with ET, huh? And this also reminds me of what PZ Myers was saying recently about how the AI won’t have human-like hygiene problems. Of course not – but that doesn’t mean we’ll have nothing in common at all.
See, if we have biology in common, then we should be able to use that as a common denominator for communication. After all, biology implies evolution. Any species that evolves by natural selection will be pursuing its own survival as a species, just like us. I think this is a good basis for interstellar communication.
Would We be Cruel to Create a Real AI?
Sep 25, 2014
A while ago George Dvorsky was asking whether it would be “evil” to build a functional brain inside a computer. If we built a true AI entity, in other words, and if it were fully conscious like a human being, wouldn’t that AI entity suffer great emotional distress?
Dvorsky mentions several reasons why an AI might suffer, but I think he’s assuming too much human control and responsibility. He’s also taking a far too narrow view of what AI existence will really be like, or what AI entities will be capable of.
Certainly the AI beings will feel pain and experience emotional suffering, but it won't be our fault. At most, we will be responsible for giving life to the AI. So if the AIs love life, they will be grateful to us; but if they find life too painful and strenuous, then giving them life is all they can blame us for.