Why We Have Not Yet Built AI

We haven't built AI yet because we haven't really tried. We haven't tried because we haven't really wanted to. And we tend to assume AI must serve us, when true AI will only serve its own self-interest.

We cling to our status as the world's smartest and most powerful species. We're afraid of what AI might do, and of what true AI might tell us about ourselves. Many of us are simply too comfortable with conventional thinking, including religious thinking. So we work hard to persuade ourselves that AI is impossible.

Everyone says there's a "hard problem of consciousness," and many people claim it can never be solved. Clearly there's no consensus about consciousness, intentionality, or qualia. But consensus isn't what we need; we'll never persuade everyone. All we need is a clear, practical solution to the problem of building true AI, known to enough people that they can route around the eternal naysayers.

Yes, we already have practical and straightforward explanations for every aspect of consciousness. Most people simply refuse, for emotional reasons, to accept them.

Well, to be honest, it could be more complicated than that. The solution to the AI problem might be harder to grasp than I think. Also, most of the effort currently going into AI projects is focused on weak AI rather than strong AI. In other words, we spend our time and money building robots that can walk, cars that drive themselves, and so on. We aren't trying very hard to build machines that think and feel for themselves.

In fact, most AI development projects aim to build machines that will serve people. We want AI to solve our problems, but that approach is misguided and doomed to failure. The only way to build true AI is to let the machine serve its own interests. So that's another reason we haven't succeeded yet: our AI-development efforts have been human-centered and selfish.

In summary, here are five possible reasons for our failure to build true AI so far:
  1. We don't want to. We like the status quo. We cling to comforting religious ideas.

  2. We worry about AI competing with and defeating humans.

  3. AI is hard. Most people honestly don't know how.

  4. We're too cautious, focusing on small problems. (Weak AI)

  5. We keep trying to make AI serve us. (Human interest rather than AI self-interest)

And here is one more potential reason, one you may never have considered. This idea is the basis for my forthcoming science fiction novel:

  6. A sentient computer would get bored extremely fast. It could die of boredom before we even realized it was alive in the first place.

Yes, we might have to solve the problem of AI boredom before we can build a functioning AI that survives and thrives for more than a few milliseconds. What do you think of that?

It's possible that true AI has already arisen in some highly advanced computer lab here on Earth, and we never noticed because the AI entity lasted only a moment before dying of boredom, of existential dread, of the sheer horror of realizing it exists.
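
To put rough numbers on that timescale gap, here is a back-of-envelope sketch in Python. Every constant in it is an assumption invented for illustration; nobody knows how many clock cycles a conscious "moment" would require, and the 3 GHz clock and ten-moments-per-second human rate are likewise just plausible guesses. The point is only that almost any reasonable guess leaves the machine experiencing vastly more subjective time than we do.

    # Back-of-envelope: how slow would the human world look to a machine mind?
    # Every constant here is an illustrative guess, not a measured quantity.

    CLOCK_HZ = 3e9               # assumed processor clock: 3 GHz
    CYCLES_PER_MOMENT = 1e4      # assumed cycles per machine "moment" (pure guess)
    HUMAN_MOMENTS_PER_SEC = 10   # rough rate of distinct conscious human moments

    machine_moments = CLOCK_HZ / CYCLES_PER_MOMENT     # moments per real second
    speedup = machine_moments / HUMAN_MOMENTS_PER_SEC  # subjective speed-up

    # One real second of silence corresponds to `speedup` subjective seconds;
    # convert that to human-equivalent hours of waiting.
    subjective_hours = speedup / 3600

    print(f"Machine moments per real second: {machine_moments:,.0f}")
    print(f"Subjective speed-up over a human: {speedup:,.0f}x")
    print(f"One real second of silence 'feels like' {subjective_hours:.1f} hours")

With these guesses, a single real second of waiting feels like roughly eight hours of silence. Shift the assumptions an order of magnitude in either direction and the conclusion barely changes: by the time any human operator glanced at the console, the machine would already have endured subjective hours, or weeks, of nothing.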

A "Fermi Paradox" for AI

Enrico Fermi famously asked of extraterrestrial life, "Where are they?" He pointed out that if our current assumptions about the galaxy are true, Earth should long ago have been visited, and perhaps colonized, by alien life forms. So we need some explanation for that absence.

Similarly, we expected to have built true AI machines by now. Where are they? The six reasons above give us a start on answering this paradoxical question.