SciFi and the Singularity

The (technological) singularity, as described by its trusty Wikipedia article, is “the hypothesis that accelerating progress in technologies such as artificial intelligence will cause non-human intelligence to exceed human intelligence for the first time in history, causing human civilization to be radically changed or possibly destroyed.” Sometimes mentioned by name and other times merely implied, the singularity turns up in a variety of sci-fi works ranging from Isaac Asimov’s I, Robot collection to the Futurama episode “Benderama.”

Many sci-fi stories that deal with advanced AI can be boiled down to a simple answer to the question: “Once we reach the singularity, what will happen next?”

Perhaps the most popular answer is: “The machines will take over. Violently.” It’s the idea behind the Terminator franchise, the Matrix movies, and even 2001: A Space Odyssey, or at least the parts of it dealing with HAL 9000. Why do we like this answer so much? What do we find so convincing about the idea that the same mechanisms that keep video games interesting, figure out what you really meant to Google, and help Siri understand what you just said today will be out for our blood tomorrow? There seems to be a mistrust surrounding the idea of inhuman intelligence, which brings me to another popular answer:

“The machines will want to become human.” Data from Star Trek: The Next Generation, David from A.I. Artificial Intelligence, and Andrew from Bicentennial Man all represent fictional AIs that covet the human condition over their own. Characters in this vein tend to be far cuddlier than the cold, calculating killers from films like The Terminator. Like the examples I listed, they tend to be androids (and therefore look human), and they’re often portrayed as curious and empathetic rather than violent and vindictive, as if the first set of traits can be learned from humanity but the second must come from whatever black void gave birth to computers.

What will happen next once we reach the singularity? Of course it’s impossible to know, since the kind of intelligence it would take to make that prediction won’t exist until it’s already too late to predict anything. It is possible, however, to hope that, having come from humanity, the next great intelligence born on this planet will have humanity’s best qualities, and maybe even mercifully lack its worst.

Comment Stream

2 years ago

Very thought provoking! That movie "2001: A Space Odyssey," which I believe is what your picture is from, is a very creepy movie. It's honestly kind of funny that some people believed back then that by now we would be living on Mars or something like that. If you've ever seen "Star Trek: First Contact," the character you talked about, Data, does turn against his crew, but only because he is reprogrammed. I believe the only downside to AI in the future is the possibility for others to "hack" technologies and program them to do different tasks with different intentions.
