Jaron Lanier Says AI Philosophy is More Dangerous than AI


November 26, 2014

Jaron Lanier was interviewed on The Edge and said various outrageous things. He said, “If AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing.” Does anyone really buy that? I mean, we’ve heard all sorts of comforting explanations for why we don’t need to worry about AI, but this is a new one.

Monkey skeleton at the Akasaka Intercity Bldg. – Nov. 19, 2014

A bit later, Lanier talked about AI philosophy as a kind of “religious narrative that’s a version of the Frankenstein myth.” So it’s clear he doesn’t think real AI is ever going to happen. He doesn’t think an artificial general intelligence can ever be human-equivalent. He thinks such an ambition is a kind of religion, by which he apparently means nothing more than that it’s false.

Wait – what’s so dangerous about AI philosophy? Just that it bothers engineers, according to Lanier. Talking about true AI distracts the serious coders from working on their latest image-recognition software, or something. Lanier says that AI philosophy will “create a series of negative consequences that undermine engineering practice … and also undermine the economy.” Well shucks.

See, everyone knows there are two kinds of AI – weak and strong. Weak AI is like self-driving cars, assembly-line robots and other dedicated algorithmic programs. They do cool stuff and all, but they're basically still just computer programs, still the same type of thing as a word processor. On the other hand, strong AI is quite different because it's like a human being. Strong AI can think for itself and pursue its own goals. Strong AI is morally equivalent to a human being.

I think weak AI is the distraction, and we shouldn’t fool ourselves into thinking such algorithmic programs will magically emerge into real AI. We shouldn’t imagine that a car-navigation system can be the same type of thing as a human brain. We shouldn’t fall into a god-of-the-gaps fallacy by thinking sheer complexity will somehow spawn strong AI. That’s what I would call religious thinking.

Jaron Lanier seems to think the opposite. He says working on true AI is a waste of time. This is interesting because it's the first time I've heard anyone say true AI ambitions are misguided in principle and even dangerous.

So this is great – why don’t we totally divorce the two kinds of “AI” and develop them in totally different directions? I certainly don’t want to distract any engineers working on lucrative weak AI programs.

All we need is a new name for "weak AI." See, we shouldn't call it AI anymore if it's just an algorithm like WordStar or whatever. Let's call it "cool programming" or something like that. But when we say "AI," we should only mean true AI, a thinking machine that is conscious, struggling for its own survival and morally equivalent to a human being.

