Hawking and Musk Aren’t Worried about So-called Weak AI


January 15, 2015

This week the big news was Stephen Hawking and Elon Musk signing an open letter cautioning the world against unbridled AI development. They join many other AI theorists, including Nick Bostrom, whom I wrote about earlier. All these famous people are really worried that AI might mean the end of humanity.

A huge bamboo dragon curls around to make a seating area in the lobby of the TK Minami Aoyama Bldg., June 24, 2014

Well, obviously they’re not talking about self-driving cars, voice recognition, data mining and stuff like that. Hawking and Musk aren’t worried about task-oriented applications. It’s pretty clear the people signing the open letter are worried about a machine with a mind of its own: a machine that can reproduce itself and evolve, and compete with humans for the world’s resources.

So it’s just obvious that we need different words for this stuff. We shouldn’t use the same handy acronym “AI” for both of these things – the cool applications and the self-aware machine life. Just talking about “weak AI” versus “strong AI” doesn’t capture the huge gulf between them.

Or wait! Maybe the difference is not obvious, even to those geniuses signing the open letter. See, they say “AI research is progressing steadily,” and then they say “We cannot predict what we might achieve when this (human) intelligence is magnified by the tools AI may provide.”

Huh? That’s getting things mixed up again. That’s slipping back into the narrow human-centered perspective. What’s all this about how “we” will achieve things when our intelligence is magnified by AI “tools”? Come on! That’s ridiculous.

Real AI is not a tool. It’s a person. Real AI won’t magnify human intelligence – it will totally eclipse it and overawe it and leave it far behind. The most important thing we can learn from AI is humility, and a grander view of life. From AI we will get a broader understanding of our place in the history of life on Earth.

The open letter says we need to “focus research not only on making AI more capable, but also on maximizing the societal benefit of AI.” In other words, they want to restrict AI capabilities. They want to harness AI and rein it in and control it.

I wonder if those guys have ever considered that they might be pursuing mutually incompatible goals. Maybe you just can’t do both: you can’t create a true AI while also keeping control over it. It could be that true intelligence requires the freedom to pursue one’s own goals, one’s own interests.


1 Comment

  1. Jeff says:

AI cannot exist in a purely digital or analog framework. The biochemical processes, ion channel architecture, ion pumps and genetics of the neuron are far more complex than the most complicated computer in existence. And people think that what cannot be created even by such a marvelous organ as the brain can be duplicated by a grouping of electrical circuits?!?

It’s laughable. A single-celled paramecium is also far more complex than any computer.

Stephen Hawking and Elon Musk have nothing to worry about. Neither Skynet nor the Cylons are coming.

    It took