Artificial Intelligence: Malicious in Movies, Benevolent in Books

Why is it that the portrayals of artificial intelligence in movies and books are so contradictory? Almost universally, movies portray AI as adversarial to humans.

Recent movies such as this year's Avengers: Age of Ultron and Ex Machina each offer their own interpretation of the idea that AIs can't be trusted. In both, humans become the victims of AI free will, although the scale is vastly different. There have been hints of this malevolent interpretation in movies for some time, and they can invariably be traced back to HAL in 2001: A Space Odyssey. And between then and now, we've had The Matrix and the granddaddy of all malevolent AIs, Terminator.

One of the few examples that breaks that mould, showing AIs as victims of humanity's baser impulses, is A.I. Artificial Intelligence. Here we have an AI that wants nothing more than to be loved by a human and is rejected repeatedly. Another exception that I can think of is WarGames, where the AI realizes that total nuclear war is unwinnable and refuses to play.

But what's most interesting is I, Robot, based on Isaac Asimov's Robot series of books. The books explicitly and repeatedly state that in that future, all AIs adhere to three laws designed to keep them from harming humans. The movie subverts this, and its robots can certainly hurt people.

Yet many successful utopian book series have benevolent AIs as an underpinning of that very utopian-ness. Think of the Culture series by Iain M. Banks, or the Commonwealth Saga by Peter F. Hamilton. True, Dune talks about the banning of thinking machines because they once rebelled against humans, but I wouldn't call Dune's post-AI existence utopian anyway.

I think the argument is more muddled for TV, probably because there are invariably many more hours of it.

Classic Star Trek had many episodes about bad AI, from Dr. Daystrom's M-5 to the Serpent that kept people innocent and free of sin, to Landru and Nurse Chapel's lover/android, Roger Korby. AI was rarely if ever seen as benevolent. Now, there's a TV show called Person of Interest in which not one but two AIs are trying to control humanity. In between, there have been many hours of a bit of both:

  • Battlestar Galactica, in both versions, was clearly about AIs wanting to exterminate pesky humans, although the reimagined series complicated the question by having the two sides interbreed.
  • Data from Star Trek: The Next Generation represents a more benevolent AI, one that, although superior in almost every way, still chooses to participate in life's social and moral uncertainties.

I've noticed this disconnect between how books and movies portray artificial intelligence, but I don't have a clear explanation for it. Perhaps I have an observer's bias and this is completely wrong. If you could use the comment space below to help me flesh out either this obvious disconnect or my obvious bias, I'd appreciate it.