Christian Philosopher Jay Richards Mentions Searle, Jumps to Conclusions

Who says TV watchers are couch potatoes? This guy's pounding the pavement while engrossed in his hand-held drama series – Aoyama-dori, June 2013

July 22, 2013

I found this old article in the American Enterprise Institute's online magazine. In it, Jay Richards refers to a Wall Street Journal article by John Searle, "Watson Doesn't Know It Won on Jeopardy!"

That WSJ article is behind a paywall, but judging from the three-paragraph intro, it sounds like Searle is making his usual point: computation doesn't equal intelligence, and you still can't get semantics from mere syntax. So far, so good.

I also found myself agreeing with Jay Richards to some extent, but his conclusion is that we’ll never create strong AI. Why so defeatist? Richards sounds like another philosopher who’s too happy to declare defeat and go home where everything is safe and comfortable. I just wanted to say, “Not so fast.”


Here’s one part of Richards’ piece that I agree with:


Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.


Hear, hear. I'm so sick of the Turing test. It was a fine rule of thumb back in the 1950s, when Turing was practically alone in the field and hardly anyone else had given AI serious thought. But we've come a long way since then; behaviorism and functionalism have come and gone. By now, everyone should be able to admit that a machine can ace the Turing test and still be an unconscious piece of lifeless hardware.
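For illustration, here's a toy ELIZA-style responder in Python (my own sketch, not anything from Richards or Searle): canned pattern reflection with zero understanding, which is exactly the kind of trick behind chatbots that fool human judges.

```python
import re
import random

# ELIZA-style canned pattern reflection (after Weizenbaum's 1966 program).
# There is no understanding here at all, just regex matching and templates.
RULES = [
    (r"\bI am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.*)", ["What makes you feel {0}?"]),
    (r"\bbecause (.*)", ["Is that the real reason?"]),
    (r".*", ["Tell me more.", "I see. Go on."]),  # catch-all fallback
]

def reply(text):
    for pattern, responses in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return random.choice(responses).format(*m.groups())

print(reply("I am tired of philosophy"))
# e.g. "Why do you say you are tired of philosophy?"
```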

After all, there's a big difference between piling up more data or processing speed and actual intelligence. Intelligence means using data to pursue the root goal. Simply building up a bigger and bigger database brings you no closer to actually using that database; having data is not the same as pursuing the root goal with it.
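To make that database-versus-use distinction concrete, here's a toy Python sketch (my own illustration; the place graph and the route search are entirely hypothetical): the same data sits inert until a search routine puts it in service of a goal.

```python
from collections import deque

# A "database": connections between places. By itself, it pursues nothing.
links = {
    "home":    ["street"],
    "street":  ["home", "station", "shop"],
    "station": ["street", "office"],
    "shop":    ["street"],
    "office":  ["station"],
}

def pursue(goal, start="home"):
    """Put the data to use: breadth-first search for a route to the goal."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # goal reached: the data has been used
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route exists in the data

print(pursue("office"))  # ['home', 'street', 'station', 'office']
```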

Then Richards starts getting into metaphysical waters:


We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same?


We didn't just happen to become conscious; we evolved. Sure, the first mutation on the path to consciousness may have been a cosmic accident, but natural selection took over from there. And since natural selection favored consciousness, I suspect consciousness wasn't really that big an accident; given enough time, it would have happened eventually.

And then Richards makes another statement I agree with. I think this is the crux of the issue:


We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers.


Yes, if we want to build a conscious AI, it will have to be qualitatively different from our current computers. The conscious machine won't be a top-down algorithmic system; it will be a neural network.
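As a minimal sketch of what I mean by a neural network rather than hand-coded rules, here's a tiny NumPy example (my own toy, not anything from Richards' piece) that learns XOR from examples: nobody writes down an XOR rule anywhere; the behavior emerges in the trained weights.

```python
import numpy as np

# Train a tiny 2-4-1 network on XOR. The "program" is just weights.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # hidden layer, 4 units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backprop: output-layer error
    d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer error
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```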

Here’s a comment I made on the Wintry Knight blog:


I totally agree that “consciousness is qualitatively different from the type of computation that we have developed in computers.” This doesn’t mean strong AI is impossible. It just means we need a different kind of computer.

The AI will not be an algorithmic program but a neural network. It won't be coded in a top-down manner; it will evolve, just as we did.

Certainly it’s true that AI advocates have been too enthusiastic and unrealistic, but we should not err in the opposite direction either. There’s no reason to think strong AI is impossible.
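To make the "evolve rather than code top-down" point concrete, here's a toy neuroevolution loop in Python (again my own hypothetical sketch): mutation plus selection tunes a single linear "neuron" without anyone programming the answer in.

```python
import random

# Evolve the weights of a linear "neuron" to mimic y = 2*a - 3*b.
# Nobody codes the solution; mutation and selection find it.
cases = [((a, b), 2 * a - 3 * b) for a in range(-3, 4) for b in range(-3, 4)]

def loss(w):
    return sum((w[0] * a + w[1] * b - y) ** 2 for (a, b), y in cases)

random.seed(1)
population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
for generation in range(200):
    population.sort(key=loss)
    parents = population[:5]  # selection: keep the five fittest
    population = parents + [
        [w + random.gauss(0, 0.1) for w in random.choice(parents)]
        for _ in range(15)    # mutation: fifteen perturbed offspring
    ]

best = min(population, key=loss)
print([round(w, 2) for w in best])  # should approach [2.0, -3.0]
```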


