Finicky Machine Spurns Google’s Call, Refuses to Recognize Cats


Daytona Beach Jan. 9, 2014 – Here’s an intriguing article with a great headline: “Artificial Intelligence? More like Artificial Idiocy.” Ha ha! It’s still so easy to laugh at AI even today. But this article includes some interesting news.

Tree roots in Kinuta Park, Aug. 15, 2013

It talks about a Google effort in 2012 to teach a computer to recognize cats. They showed the computer 10 million cat photos and apparently just hoped that the sheer volume of data would somehow solve the problem. No luck. The Google program was surely a neural network, so does this mean there’s something wrong with the neural network model of intelligence?

My guess is that the Google team didn’t really train their program to find cats. They trained it to do something statistically related to cats, but that’s not the same as finding them. What the researchers need to do first is train the machine to want to find cats. If they can do that, then the machine will figure out the rest for itself.
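To make that distinction concrete, here is a minimal sketch (in Python, purely hypothetical, and not what Google actually ran) of what “training a program to do something statistically related to cats” usually amounts to: a loop that nudges weights to shrink a loss number over labeled examples. The data below is random and the classifier is a toy; the point is that nothing in the loop wants anything.

```python
# A toy sketch (not Google's actual system) of "statistical" training:
# adjust weights to minimize a loss over labeled examples. Data is random.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "photos": 1000 tiny feature vectors, randomly labeled cat (1) or not (0).
X = rng.normal(size=(1000, 64))
y = (rng.random(1000) < 0.5).astype(float)

# One-layer logistic classifier: it learns a statistical correlation
# between inputs and labels, nothing more.
w = np.zeros(64)
b = 0.0
lr = 0.1

for step in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "cat"
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# Nothing here "wants" to find cats; the loop only reduces a number (the loss).
loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
print("final loss:", loss)
```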

Yes, the key problem here is not with the neural network model but with computer motivation. How can you make a computer want to succeed, or want anything at all? I wonder if the Google people have an answer.

My own theory says that motivation (both computer and human motivation) is a matter of energy flow. Desire is like water flowing downhill. It’s neural-electric energy flowing through our brains and out to our muscles, causing us to do things that we identify – perhaps retroactively – as objects of our desire.

Anyway, I’m now looking again at this article’s sub-header: “The greatest illusion that ‘big data’ embodies is that information is ever separate from life.” Wow, they’ve hit the nail on the head here. But the message is not that machine intelligence is impossible; the message is that machine intelligence must be alive!

Being alive means wanting your own wants. It means pursuing your own root goal of species survival. The Google experiment in cat recognition failed because they were telling the computer what to do. They wanted the computer to see cats, but the computer itself didn’t want to. Cats weren’t useful for the computer’s survival struggle. In fact, the computer wasn’t alive at all, because it wasn’t struggling for its own survival.

No wonder the project failed. No wonder people are still laughing at highly ambitious AI projects.

