The Singularity Q&A

Q: Is genetic engineering the only way for us to keep up with our technology?

A: I include this question because renowned physicist Stephen Hawking has actually made the argument that humans should increase their intelligence via genetic engineering in order to maintain their edge over their soon-to-be-intelligent machines.  He has also suggested neural interfacing as a means of ensuring that machine intelligence enhances, rather than opposes, human intelligence.  Hawking's suggestions fail to pass muster on several important counts.

First, as discussed earlier, genetic engineering, as it is likely to stand for the foreseeable future, is an extremely slow process, with each progressive step slaved to the speed at which the organism can develop from a fertilized cell to maturity.  In the case of humans and their brains, this would take many years per cycle.  Artificial Intelligence, on the other hand, would have no comparable delays once it becomes capable of redesigning itself for greater intelligence.  Computers can be manufactured in hours and reprogrammed in seconds.  By the time a single human generation can be tweaked for higher intelligence, AI could literally have passed through thousands of generations.  In short, if it ever actually becomes a contest -- man vs. machine -- it will be no contest at all.
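The scale of this mismatch is easy to check with back-of-the-envelope arithmetic. The figures below are illustrative assumptions (a 20-year human generation, a one-day AI self-redesign cycle), not claims from the text:

```python
# Rough illustration of the generational-speed gap described above.
# Both figures are assumptions chosen only for the sake of comparison.

HUMAN_GENERATION_YEARS = 20   # assumed time for one engineered human generation to mature
AI_CYCLE_HOURS = 24           # assumed time for one AI self-redesign cycle

human_generation_hours = HUMAN_GENERATION_YEARS * 365 * 24
ai_generations = human_generation_hours / AI_CYCLE_HOURS

print(f"One human generation = {human_generation_hours:,} hours")
print(f"AI generations in that time: {ai_generations:,.0f}")
```

Even with a generous one-day redesign cycle, the machine passes through thousands of generations per human one; shrink the cycle toward "reprogrammed in seconds" and the gap grows by further orders of magnitude.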

Will it come to this?  It certainly doesn't need to, and if there ever turns out to be a Terminator-style war, it will probably be the humans who start and prosecute it.  Why?  All humans, including Hawking, have a genetically inherited behavioral tendency to regard those with obvious differences as threats.  (In the unfeeling context of natural selection, this actually makes sense; those who are visibly different from you probably do not share as many of your genes as those from your own kin groups -- the reproductive success of the outsiders' genes could therefore come at the expense of your own.)  Artificial Intelligence, on the other hand, would have no such evolutionary legacy distorting its perspective on interpersonal relationships. 

This is not to say that humans cannot rise above their inclinations and treat others with respect; they do so daily.  This is also not to say that AI could not be engineered in a manner that would be threatening to mankind; it certainly could.  But the only seeds for conflict that have already been planted lie within us, not the machines.  It would be in our best interest to broaden our circle of acceptance to all intelligent beings, lest we find ourselves irrationally lashing out at those who are likely to be our most powerful and faithful allies in the greater war against suffering and stagnation.

Lastly, Hawking's suggestion regarding neural interfaces makes a certain sense, but given the advantages pure machine intelligence would have over biological minds, it does not seem likely that intelligence augmentation through neural interfacing could have the desired effect of keeping the human portion of the mind in the driver's seat.  If AI becomes millions of times more intelligent than humans, which is actually the likely scenario, an augmented human's intelligence would have to be more than 99.9999% artificial to "keep up" with AI.  To such a person, the human brain would be a long-obsolete, even dangerous hindrance -- not unlike the human appendix.
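The 99.9999% figure follows directly from the "millions of times" premise. Assuming, for illustration, an AI exactly one million times more intelligent than an unaided human, and a biological brain contributing one human-baseline unit to the merged mind:

```python
# Worked check of the "99.9999% artificial" figure above.
# AI_ADVANTAGE is an assumed value taken from the "millions of times" premise.

AI_ADVANTAGE = 1_000_000          # AI intelligence, in human-baseline units
human_share = 1 / AI_ADVANTAGE    # fraction contributed by the biological brain
artificial_share = 1 - human_share

print(f"Artificial share of the augmented mind: {artificial_share:.4%}")
```

The biological contribution is one part in a million -- 0.0001% -- which is what makes the appendix comparison apt.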

But this need not be cause for alarm.  For those who do not find the thought of mind/machine mergers and upgrades abhorrent -- and we can expect that attitudes will change over time -- this type of evolution is probably not only inevitable, but highly desirable.  Humans need not worry about staying in control of the future when they can become the future.

©2002 by Mitchell Howe