Lawrence Krauss Won’t Worry, but Most People Should – a Little
May 27, 2015
Lawrence Krauss wrote a brief piece about AI for this year’s Edge question, which was reposted on the IEET site, and it’s as breezy as you might expect from someone derided for his lack of philosophical subtlety. He says, “What, me worry?”
Personally, I’m also not too worried about an AI apocalypse, even though I do recognize some dangers. Actually, I think there are two different dangers we could consider:
- The stereotypical science fiction scenario where the AI becomes self-aware and then decides to wage war against humanity, like in the Terminator films and other stories.
- A more mundane and subtle scenario in which we humans gradually become more dependent on machines as tools, the machines grow ever more complex and capable, and eventually we can no longer survive without our machine helpers, nor can we tweak and adjust them ourselves, because they have become too complex.
In the second scenario, our helper machines might actually be under the control of some big corporation that is stealing our personal data and manipulating us into purchasing its products, or just into behaving in ways that serve its interests. In effect, we would be enslaved by the corporation.
This kind of thing is already happening, of course. When was the last time you tinkered with your car under the hood? Back in the 1980s? And how many people actually root their phones? Have you ever tried using an iPhone without iTunes? What if you don’t like iTunes? Do you really have a choice?
Today lots of people depend on Facebook, but who can really control their data there? We’re fairly helpless when Facebook decides to tweak its software to serve its own corporate interests. Our only choice is to accept it or drop out of Facebook altogether.
But someday soon, we might not be able to drop Facebook or stop using Apple hardware. Someday soon our lives might depend on such things. If Jeb Bush becomes president, our whole healthcare system might be tied to the Apple Watch! Sure, that sounds silly right now, and it will never happen exactly that way, but this kind of thing is the more realistic “evil AI” scenario.
Today’s commercial software is already too complex for most people to recode themselves, even if it weren’t proprietary and closed-source. The preferences that users can tweak are ridiculously narrow and superficial. Users today are being trained to accept their helplessness and adapt to a one-size-fits-all model controlled by the big software developers.
That’s what I’m worried about. Not for myself, really, since I mostly stick to free, open-source software that I can actually program. But I realize I’m in a very small minority here, and most people think I’m a bit crazy for not keeping up with all the latest cool social media platforms. Hmm.