The Singularity Q&A

Q: What is the intellectual history of the Singularity concept?

A: John Smart gives a thorough treatment of this question on his SingularityWatch website (though he calls it "brief"), so I will not duplicate the effort in this space.  Instead, I will give you my own, shorter version, in which I introduce a few of the early pioneers. [Note: John Smart has decided to eliminate the word Singularity throughout his site, replacing it with 'accelerating change.']

I start back with the beginning of modern science fiction, which Mary Shelley is traditionally regarded as having christened with her 1818 tale Frankenstein, or the Modern Prometheus (you can read it here).  She earned this distinction by creating a speculative ethical dilemma resulting directly from scientific progress.  Bearing little resemblance to the campy motion pictures he would inspire, Victor Frankenstein's monster was a highly intelligent being of great emotional depth, but one who could not be loved because of his hideous appearance; for this, he vowed to take revenge on his creator.  The monster actually comes across as the most intelligent character in the novel, making Frankenstein perhaps the first work to touch on the core idea of the Singularity.

Early work with computers in the 1930's and 40's allowed writers to present more specific visions of how greater-than-human intelligence might come about.  Alan Turing, instrumental in computer development during and after World War II, was clearly speculating on Artificial Intelligence by 1950, the year he wrote a paper describing what would come to be known as the "Turing Test" benchmark for intelligent machines.  Turing placed no limits on what artificial minds might accomplish, convinced that there was nothing done by the human brain which could not ultimately be done with equal or greater prowess by some mechanical equivalent.

Specific ways by which developmental Singularities might occur were also beginning to appear around this time, with another early computer scientist, John Von Neumann, articulating the idea of machines which could store their programming instructions in the form of data, rather than wiring.  (Today such machines are called "Von Neumann machines," of which your own computer is an example.)  These instructions, he mused, might eventually include those needed to reproduce the machine itself.  Robots making robots making robots... a situation which, if fully realized, would result in exponentially skyrocketing productivity and economic growth -- hallmarks of a developmental Singularity.
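The arithmetic behind that "exponentially skyrocketing" claim is simple enough to sketch. Assuming, purely for illustration, that each machine builds one working copy of itself per production cycle, the population doubles every cycle:

```python
def replicator_population(initial: int, cycles: int) -> int:
    """Population after `cycles` rounds in which every existing machine
    builds exactly one copy of itself (a doubling per cycle)."""
    population = initial
    for _ in range(cycles):
        population *= 2  # each machine adds one new machine
    return population

# A single self-replicating machine becomes over a thousand in 10 cycles,
# and over a million in 20 -- the signature of exponential growth.
print(replicator_population(1, 10))  # -> 1024
print(replicator_population(1, 20))  # -> 1048576
```

The one-copy-per-cycle rate is a hypothetical simplification, of course; any steady replication rate above zero yields the same exponential shape, just on a different timescale.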

The most logical application of Von Neumann machines, figuring into pretty much every Singularity scenario, would be in nanotechnology, a scientific enterprise kicked off in style with Richard Feynman's 1959 talk There's Plenty of Room at the Bottom.  The extreme miniaturization of computers and other machines used today, Feynman realized, would offer tremendous benefits.  But achieving this would require very clever engineering due to the forces at play at the very small scales he, as a physicist, knew so well.  The best way to build something ultra-tiny is with something equally tiny; enter the Von Neumann replicators.

Over the past few decades, many others have helped flesh out the ideas presented by the pioneers I have mentioned, a number of whom are sure to come up in my responses to questions regarding the topics to which they have contributed.  I would like to conclude this very brief history by emphasizing that these seminal, revolutionary visions of the future have only grown in perceived plausibility and proximity.  Frankenstein is constantly barging in on discussions of bioethics and genetic engineering.  Moore's Law makes Artificial Intelligence more feasible every day.  Nanotechnology already contributes to many commercial products.  The predictive horizon is shrinking, and the lips of today's futurists are buzzing with a slogan worthy of a sandwich-board prophet:  "The Singularity is Near!"

[Back to Futurism] [Back to Main]  ©2002 by Mitchell Howe