Chatbots are the Lowest of the Low – PZ Myers #2

September 18, 2014

Look, PZ Myers wrote about artificial intelligence again. That’s twice in two weeks, and again I can’t really disagree with his assessment because he says “Chatbots are boring. They aren’t AI.”

Dry waterfall at Yoga Station – Sept. 7, 2014 – You can’t really tell the perspective, but it’s about 4 meters high.

Well, guess what, PZ? It’s boring to always be agreeing with you! That’s why I hardly ever comment on his stupid site. It’s too lame just to say “Great post” and that kind of stuff. If you can’t get into a nice knock-down, drag-out argument, then what’s the point of commenting at all? And anyway, he has too many commenters already, so anything I’d say would get buried.

Anyway, here’s what he says this time:

Chatbots are kind of the lowest of the low, the over-hyped fruit decaying at the base of the tree. They aren’t even particularly interesting. What you’ve got is basically a program that tries to parse spoken language, and then picks lines from a script that sort of correspond to whatever the interlocutor is talking about. There is no inner dialog in the machine, no ‘thinking’, just regurgitations of scripted output in response to the provocation of language input.
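Just so nobody thinks he’s exaggerating, here’s roughly what that kind of program boils down to. This is a toy sketch in Python; the keywords and canned lines are made up, but the structure is the whole trick: match a word in the input, spit back a scripted reply, and fall back to a stock phrase when nothing matches.

```python
import random

# A toy keyword-matching chatbot: scripted canned replies keyed on words in
# the input, with a fallback when nothing matches. Keywords and replies are
# invented purely for illustration.
SCRIPT = {
    "weather": ["Lovely day, isn't it?", "I hear it might rain."],
    "robot":   ["Robots are fascinating.", "Beep boop."],
    "hello":   ["Hi there!", "Hello!"],
}
FALLBACK = ["Tell me more.", "Interesting. Go on."]

def reply(user_input):
    words = user_input.lower().split()
    for keyword, lines in SCRIPT.items():
        if keyword in words:
            return random.choice(lines)   # regurgitate a scripted line
    return random.choice(FALLBACK)        # no "thinking", just a default

if __name__ == "__main__":
    print(reply("What do you think of the weather?"))
    print(reply("Do you dream?"))
```

Notice there is nothing in there that models the conversation, the speaker, or the world. That’s exactly the “regurgitation of scripted output” PZ is talking about.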

Lots of people still want to call that AI, and that just goes to show how broad and vague the term is. People who want to talk about fully intelligent, conscious, living machines can’t just say AI anymore; they have to qualify it by saying “real AI” or “strong AI” or “Artificial General Intelligence” or something like that.

But apparently there are people who think a real AI can somehow emerge, for no particular reason, from a chatbot. PZ Myers mentions David Hanson, who thinks real AI might spontaneously “wake up” or “catch on fire” somehow. Hanson does say the AI might “start to evolve spontaneously and unpredictably,” so he’s at least thinking about the concept of evolution. But how can something evolve if there’s no population, environment or selective pressure?
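To be concrete about what those three things buy you, here’s a toy sketch of evolution as an algorithm. The target string and the numbers are made up; the point is that you need a population of candidates, an environment that scores them, and selective pressure that lets the better ones reproduce. Remove any one of those and nothing evolves.

```python
import random

# A toy evolutionary loop. The "environment" is just a target string I made up;
# fitness is how closely a candidate matches it.
TARGET = "wake up"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE = 100

def fitness(candidate):
    # the environment: score by how many characters match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # reproduction with variation
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# the population: many candidates, not one lonely program
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:POP_SIZE // 5]   # selective pressure
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

print(generation, population[0])
```

A lone chatbot sitting on a server has none of those ingredients, so “it might start to evolve spontaneously” isn’t an explanation of anything.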

You can’t just sit there hoping for a miracle. You’ve got to give us some kind of practical explanation. Just piling up more and more data won’t make AI happen because AI is not data; it’s a system for processing and organizing data. That’s a big difference. You could have a huge mountain of data, and maybe nobody would care. But to build real AI, you’ve got to build a machine that cares. That’s the tricky part.

The comments on PZ’s site are pretty interesting reading too. One guy seems to deny PZ’s point that chatbots aren’t AI. He writes:

AI is a very diverse field and most researchers focus on specific areas of it. Some study natural language processing and produce things like chat-bots, voice recognition, and text compression algorithms. Some study graph searches and decision trees, and produce things like chess-bots. Some study classifiers. Some study emergent behaviour. Some study pattern recognition. Some study constraint satisfaction. There are probably a dozen other fields that I’ve never heard of.

Well, none of those things are real AI. They’re nowhere close to building a fully intelligent, conscious machine that is alive and morally equivalent to a human being. Maybe such researchers subscribe to some kind of “tipping point” theory and dream of their programs “waking up” someday, but that’s just a comforting delusion – unless they have a practical theory to explain how the machine might wake up.

That commenter didn’t mention neural networks, which I find frustrating. Maybe some mathematically minded researchers have argued that a neural network can never possibly achieve XYZ, etc. And maybe that’s why neural network development isn’t among the most visible fields of AI these days. But look at the human brain – it’s a neural network! If a neural network can’t possibly do something, then we humans can’t either.
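For readers who’ve never poked at one: a neural network, at its simplest, is just layers of weighted sums squashed through a nonlinearity, with the weights nudged to reduce error. Here’s a toy two-layer network learning XOR; the layer sizes, learning rate, and iteration count are arbitrary, and I’m obviously not claiming this is how the brain works.

```python
import numpy as np

# A toy two-layer neural network learning XOR by gradient descent.
# Purely illustrative: sizes, learning rate, and iteration count are arbitrary.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 0.5
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    err = out - y                     # prediction error (squared-error gradient)
    grad_out = err * out * (1 - out)  # backprop through the output sigmoid
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```

Whether piling up more layers of that ever gets you a machine that cares is, of course, the open question.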

