www.mitchellhowe.com


The Singularity Q&A

Q: What will it take to create Artificial Intelligence?

A: Snips, snails, puppy dog tails, sugar, spice, and everything nice... these things have nothing to do with Artificial Intelligence (well, maybe that last one does, but we're not discussing AI "Friendliness" theory just yet).  I open this way because some will find this response noteworthy for what it does not include.  Such readers are advised to look for follow-up questions where I explain the reasoning behind my more prominent omissions.

I cannot say exactly what a successful AI design will look like.  If I could, I would hand it off to some responsible programmers and we would reach the Singularity that much sooner.  But I can give a short list of qualifications for any engineer or team hoping to succeed at creating genuine Artificial Intelligence:

As a basic entry-level requirement, AI designers must have a good idea of how general, multi-purpose intelligence works.  Since the human brain is currently the only functioning example we have access to, it is hard to imagine that anyone will solve the problem of intelligence without a solid grounding in the cognitive sciences.

This is not to say that a truly alien intelligence, bearing no resemblance to human minds, is impossible.  On the contrary, human intelligence is probably an unremarkable example of intelligence in general, saddled with all manner of inessential baggage left over from evolution.   Another important AI-design skill, then, is the ability to pick out those aspects of human minds necessary to general intelligence.  Evolutionary psychology (Q&A entry forthcoming), which looks at the behavior of our minds from the vantage point of evolution, is a particularly powerful tool for acquiring this ability.

But the selective student of the mind must, at the same time, be very wary of leaving out anything important.  The brain uses a great deal of redundancy in its low-level design -- losing a few neurons here and there will not cause it to break down.  But the loss of an entire design feature -- such as memory formation -- would be catastrophic.  A brain or AI in this situation would not be 90% functional for lacking 10% of its core design, any more than your car would be 90% functional if it were missing 10% of its engine.  It would be 0% functional, or something very close to it.  "Lemon" AI software could still be useful for other things -- like a derelict car used for target practice -- but not for general intelligence.  An AI designer, then, must be able to bring all of the critical pieces of mind together from the beginning; a brain-dead AI is unlikely to offer any suggestions as to what it is missing.

Adding to the challenge, artificial intelligence will rely on hardware far different from that of the human brain, since, for the foreseeable future, computers built from transistors etched in semiconductors will be the only platform available for AI development.  As discussed in an earlier response, transistors and neurons are about as closely related as field mice are to the mouse on your desktop.  Thanks to the work of computer pioneer Alan Turing, we know that pretty much anything computable in one kind of machine can be computed in another -- given enough time/speed and memory -- and it is generally believed, although not yet proven, that the brain is "Turing computable."   But interpreting "wetware" brain features as functionally equivalent digital software is sure to be a great challenge for AI designers.  A strong understanding of mathematics and complex systems will probably be needed for success in this area.
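To make the Turing-equivalence point concrete, here is a minimal sketch (in Python) of one drastically simplified "wetware" feature rendered as ordinary digital software -- a neuron that accumulates charge, leaks, and fires at a threshold.  Every constant here is chosen purely for illustration, not taken from biology:

    # A toy "leaky integrate-and-fire" neuron: the membrane voltage leaks
    # toward rest, accumulates incoming current, and "spikes" at threshold.
    # All parameters are illustrative, not biologically calibrated.

    def simulate_neuron(inputs, dt=1.0, leak=0.1, threshold=1.0, v_rest=0.0):
        """Return the time steps at which the model neuron fires."""
        v = v_rest
        spikes = []
        for t, current in enumerate(inputs):
            v += dt * (-leak * (v - v_rest) + current)  # leak + integrate
            if v >= threshold:                          # fire...
                spikes.append(t)
                v = v_rest                              # ...and reset
        return spikes

    # A steady drip of input current makes the neuron fire periodically.
    print(simulate_neuron([0.15] * 50))

Real neurons are vastly messier than this, of course; the point is only that a behavior described in terms of voltages and thresholds translates directly into a loop over numbers -- exactly the kind of reinterpretation AI designers will have to perform, at far greater scale and fidelity.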

Paradoxically, one of the hardest things about designing AI may be recognizing initial success.  A baby AI will enter the world knowing much less than a human baby, and will presumably appear even less intelligent for it.  Humans, contrary to some outdated (though still popular) notions, are born pre-configured to easily learn a wide array of essential -- yet highly complicated -- skills.  Fluency in language, for example, is acquired by children at a prodigious rate that defies any other explanation.  So, before programmers can even teach an AI to understand human languages, they will have to teach it how to learn language in the first place.  This is just one example of why successful AI designers, taking little for granted, will not be able to rely on many existing educational materials, and must be able to create curricula specialized for the unique needs of an artificial mind.

Moving on to the obvious, an AI development team must have adequate funding, both to support themselves through the completion of the design and to obtain the hardware to run it on.  Without a detailed design summary, it is nearly impossible to guess how much computing power will be needed to run an AI at a recognizably intelligent speed.  But judging from the life-cycles of most other software, the power needed to run the first prototypes will be considerably greater than that eventually required by optimized, debugged versions.  Supercomputers may well be needed, although Moore's Law ensures that the cost of computing power cannot remain a bottleneck in an AI program forever -- if it even is one now.
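As a back-of-the-envelope illustration of why, consider this small calculation (the 18-month doubling time and the 1,000-fold shortfall are my assumptions for the sake of the sketch, not figures from any actual AI project):

    import math

    # Assumption for illustration only: hardware price-performance
    # doubles every 18 months (one common reading of Moore's Law).
    DOUBLING_MONTHS = 18

    def years_until_affordable(shortfall_factor):
        """Years until hardware that costs `shortfall_factor` times too
        much today falls within budget, under the doubling assumption."""
        doublings = math.log2(shortfall_factor)
        return doublings * DOUBLING_MONTHS / 12

    print(years_until_affordable(1000))  # roughly 15 years

In other words, even a project whose hardware needs outstrip its budget by a factor of a thousand would, on this assumption, only have to wait about fifteen years -- which is why raw computing cost is a temporary obstacle, not a permanent one.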

Finally, designers must have a safe, coherent goal system in mind that will allow AI to productively interact with its environment -- especially the 6 billion other minds already on the planet.  A functional goal system is probably essential to get an AI running in the first place, but any responsible engineer will want to make sure that these goals will not threaten humanity when the new mind becomes intelligent enough to actually achieve them.  It is in the granting that wishes can become dangerous.

Of course, anyone can try their hand at AI without meeting these requirements (although I really hope they'll at least mind that last one).  But they are about as likely to succeed at creating artificial intelligence as whoever is going around chasing those poor puppy dogs with scissors.




©2002 by Mitchell Howe