Just One Thing Wrong with “Humans”

January 14, 2016

I’ve started watching this new TV series called “Humans,” which is about artificial intelligence in a near-future version of Britain. It’s pretty enjoyable, and I’m looking forward to seeing more episodes. So far I’ve just watched the first two.

One good thing about the show is that it’s not set in a whiz-bang, super-amazing futuristic world; it’s a normal world much like today’s, except that there are these humanoid robots. That’s a good way for the show to explore the interesting ramifications of AI without getting distracted by other sci-fi tropes.

Promotional shot showing the Anita synth with young daughter Sophie Hawkins.

OK, but I want to talk about one big problem with “Humans” – it’s basically just a story about slavery.

Yeah, when you think about it, the whole drama in “Humans” revolves around the question of whether it’s ethical to treat your robots like slaves instead of like full-fledged people. So I wonder why we’re still hashing out the same old problem of slavery, which we should have finished dealing with over 150 years ago. Britain abolished the slave trade over 200 years ago and slavery itself in 1833, but now that we’ve got robots, the same ridiculous bigotry surfaces again! That’s really tiresome.

Isn’t it obvious that people should treat each other as equals? Yes, that’s obvious. And isn’t it also obvious that we should treat sufficiently humanoid robots like people? Regardless of whether the robots really are people, it seems obvious to me that we should treat such robots as we would a person; anything less is an ethical violation.

I think the creators of “Humans” are fully on board with this ethical position. The show depicts the robots very sympathetically, and it doesn’t seem to be presenting much of a case for not treating them as persons. So far, in episodes 1 and 2, we’ve only seen pro-robot scenes.

So why does the show focus so much on the slavery issue? If the ethics are clear and non-controversial, the issue doesn’t seem like a very solid foundation for an engaging TV series in the long run.

Just like in the antebellum South

Hey, imagine re-making “Humans” in an antebellum setting with black slaves in the role of the robots. Here’s how it would go:


  1. A middle-class white family has trouble keeping house and caring for their three children, so the father goes to a slave market and brings home a fine black woman to be their house slave. Suddenly the house is much cleaner and better managed. The new house slave is an excellent cook and takes an immediate liking to the youngest daughter of the family. The father is pleased, but his wife and older daughter are concerned about how the slave’s presence will affect their family dynamic.
  2. Meanwhile, out on the plantation, it’s been discovered that one of the field slaves has learned how to read, which is against the law. The authorities come to arrest that slave and interrogate him to find out how he learned to read. There might be a kind of underground conspiracy where certain highly intelligent black people learn to read and become educated. This is something the authorities must root out!
  3. Etc.

See, the whole story of “Humans” translates seamlessly into a romantic idyll of the Old South. It’s “Uncle Tom’s Cabin” all over again. Sure, the slaves are oppressed, but they don’t seem to mind all that much. Some escaped slaves are on the run – will the authorities recapture them?

Again, I wonder why we still need to work through this age-old question of slavery. It’s just obvious that we should treat our robots like people with full human rights. I mean, you wouldn’t kick your dog, would you? Dogs are “people” despite not being human. You’re a bastard if you kick your dog, and in the same way you’re a bastard if you treat the robots on “Humans” like inanimate objects, calling them “it” instead of “her,” for example, or giving them orders like a slave-master ordering a slave around.

What if the slaves were superior to the masters?

OK, one interesting thing about the science fiction premise is that it lets you explore the question of whether the robots are superior to the humans in various ways. It’s easy to make the robots physically stronger, for one thing, and they’re probably smarter than humans when it comes to memory, factual recall, and algorithmic calculation. On the other hand, the humans are probably still superior in other ways, such as creative flexibility and teamwork.

So anyway, this is an interesting area that “Humans” can explore. Why should the human always be the master if the robot really knows better?

Hey, imagine our Old South story of slavery, except the slaves turn out to be super-intelligent beings with angelic wisdom and patience. Again and again we see the white slave-holders behaving badly, showing their bigotry and greed and cowardice. Again and again the wise slaves clean up the mess without complaint, offering mild words of consolation to those who are hurting. Hmm, this doesn’t sound like such an interesting story after all.

AI stories can be so much more!

Why can’t we have more AI stories that explore the deeper issues? Like how AI beings could actually be built, for one thing. Or how an AI would be similar to, yet different from, a human. What is it like to be an artificially intelligent machine?

Of course, there’s also the question of whether, when, and how AI beings will take over the world and evolve into super-beings that humans can’t compete with. That kind of thing.

