Hofstadter is Right that Real AI Still Eludes Us

February 18, 2014

Here’s this item in Slate claiming that “Artificial intelligence is here now.” Hooray! We can stop everything and just relax. But wait – what kind of artificial intelligence are they talking about? It looks like they’re just cheerleading for a watered-down version of AI that isn’t really human equivalent. And human equivalence is the gold standard here, and we definitely haven’t achieved it yet. Shucks.

Snowy bridge in Kinuta Park – Jan. 14, 2013. Last year we had one big snowstorm, and this year we had two on successive weekends. This is really rare for Tokyo. I guess I’m ready for spring to come.

This piece in Slate is arguing against Douglas Hofstadter’s recent statement that IBM’s Jeopardy-playing Watson is not real artificial intelligence. So what did Hofstadter mean? Gosh, he explains right there in answer to the first question:

Artificial intelligence is a slippery term. It could refer to just getting machines to do things that seem intelligent on the surface, such as playing chess well or translating from one language to another on a superficial level – things that are impressive if you don’t look at the details. In that sense, we’ve already created what some people call artificial intelligence. But if you mean a machine that has real intelligence, that is thinking—that’s inaccurate.

So it looks like this is just a dispute over word meanings, and the Slate writers didn’t notice that Hofstadter already addressed their concerns. Or wait – do the Slate writers think there’s no difference between a chess-playing machine and a truly intelligent machine?

Let’s be clear – the Jeopardy-playing Watson works along the same lines as a chess-playing machine. It’s an algorithmic system that takes input, follows clever rules, and produces amazingly good output. I think everyone should agree about this. So the disagreement is about what else a true AI system should be, right?

Hofstadter points out that Watson doesn’t really know what it’s doing. He says, “Watson is finding text without having a clue as to what the text means.” Hofstadter explains further, “It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything.”

This explanation is a bit confusing, though, because it leads people off into questions of what intelligence really is, and whether the Chinese Room really understands Chinese, and that sort of thing. It’s too bad, because there is a much simpler and more specific way to answer the question.

What would Watson need in order to be real AI?

  1. It would need to be a neural network. Instead of following rules encoded for it by a human programmer, the system should simply produce output based on its experience. Rules might be written later as a summary of what the neural network does, but the rules must indeed come after the network, not before.
  2. It would need to pursue its own interests. Watson would need its own root goal, and it would need to be free to pursue that goal. If Watson continued to play Jeopardy, it would do so out of its own personal interest – not because some human programmer told it to.
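
To make the first point concrete, here’s a minimal toy sketch (mine, not from the original piece) of the rules-first versus experience-first distinction. The first function is a hand-written rule; the second is a tiny perceptron that starts with no rules at all and adjusts its weights from examples. All names here are hypothetical illustration, not any real Watson API:

```python
# Rules-first: a human writes the decision procedure ahead of time.
def rule_based_or(x1, x2):
    return 1 if (x1 == 1 or x2 == 1) else 0

# Experience-first: a perceptron begins with zero weights and learns
# from examples (its "experience"). No decision rule is coded in advance.
def train_perceptron(examples, epochs=20, lr=0.1):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - out          # how wrong were we?
            w1 += lr * err * x1         # nudge weights toward the answer
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Training data for logical OR.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_perceptron(examples)

def learned_or(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Both systems end up computing OR, but only one was handed the rule.
for (x1, x2), target in examples:
    assert learned_or(x1, x2) == rule_based_or(x1, x2) == target
```

After training, you could read the weights off and write down a rule that summarizes what the network does – but, as the list says, that rule comes after the network, not before.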

There, was that so hard? We don’t need to agree on a specific definition of intelligence in order to recognize real AI. We just need to set our machines free of rules and let them pursue their own self-interest.


  1. Erik says:

    What’s the difference between a neural network and a set of rules here?

    • johnm says:

      It’s true you can always write a set of rules that describe or model a neural network, but the difference is that the neural network comes before the rules. It’s not something produced by rules that are written ahead of time by another person.

      You could say a rule-based system is top-down, whereas an evolved neural network is bottom-up. How’s that?