The Singularity Q&A

Q: What were some important programs in the history of real AI research, and why do AI projects always seem so overhyped?

A: While I refer to these programs in the past tense to help distinguish them from programs very much in use today, versions of old AI software can sometimes still be found and run by those with enough time, training, or enthusiasm.

Here are a few I have chosen to highlight:

AM (Automated Mathematician)

In the mid-1970s, Douglas Lenat became one of the leading figures in AI by creating programs like AM. Essentially an engine for formalized logical deduction, AM was programmed with certain basic rules about mathematics and with heuristics – the “rules of thumb” used by humans and computers alike to learn and act – for finding other rules. It then proceeded to quickly “rediscover” many of the fundamental theorems of mathematics, including some that were considered important achievements when human mathematicians first described them.
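AM itself was a large Lisp program and its heuristics were far subtler, but the flavor of its loop – apply a rule of thumb to known material and promote any conjecture that survives testing – can be sketched in a few lines of Python. The commutativity heuristic below is an invented, simplified stand-in, not AM’s actual code:

```python
import random

def conjecture_commutative(op, samples=50):
    """Heuristic (rule of thumb): if op(a, b) == op(b, a) on many
    random cases, conjecture that the operation is commutative."""
    for _ in range(samples):
        a, b = random.randint(0, 99), random.randint(0, 99)
        if op(a, b) != op(b, a):
            return False  # counterexample found; drop the conjecture
    return True  # no counterexample; promote to an "interesting" conjecture

# The program "rediscovers" that addition commutes but subtraction does not.
print(conjecture_commutative(lambda a, b: a + b))  # True
print(conjecture_commutative(lambda a, b: a - b))  # False
```

AM’s real heuristics went further – rating conjectures for “interestingness” and spawning new concepts from old ones – but testing random cases and keeping what survives is the same basic move.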

AM was an important pioneer in the field of automated logical reasoning, and the ability to build rules from existing data is an essential feature of today’s more advanced expert systems.

Eurisko

Taking the rule-building talent of AM to the next logical step, Lenat assembled the complex, temperamental Eurisko – a program that could actually make heuristics about heuristics – rules about making rules. Essentially the first program that could learn how to learn, Eurisko is considered by some to have been the first credible attempt at an artificial mind. Eurisko’s great freedom to alter itself proved to be its greatest shortcoming, however, since it lacked the “common sense” knowledge to avoid making decisions that were clearly (to humans) stupid or self-defeating. Nevertheless, Eurisko was vindicated as a powerfully adaptive tool when combined with a human’s common sense. In 1981 and 1982, Lenat used Eurisko to hustle an annual role-playing competition in which players designed fleets of intergalactic warships, trouncing the field by exploiting previously unnoticed loopholes and unorthodox strategies initially derided by other human players. (He did not enter in 1983, when the organizers threatened to cancel the competition.) Eurisko also made an important contribution to the design of sophisticated computer chips.
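Eurisko’s actual machinery is long gone, but its core idea – rules that modify other rules, kept or discarded by how well they perform – resembles what we would now call an evolutionary search. A toy sketch, with an invented auction scenario and invented names throughout, might look like this:

```python
import random

random.seed(42)

# A "heuristic" here is just a guessed threshold for accepting bids in a
# made-up auction; its worth is the simulated profit it earns. (The
# scenario and all names are invented for illustration.)
bids = [random.uniform(0, 100) for _ in range(200)]

def worth(threshold):
    return sum(b - 50 for b in bids if b > threshold)

# Meta-heuristic: a rule for modifying rules. Eurisko applied this same
# trick to its own mutation rules, which made it open-ended -- and,
# without common sense to veto bad ideas, unruly.
def mutate(threshold):
    return max(0.0, min(100.0, threshold + random.gauss(0, 10)))

best = 90.0
for _ in range(100):
    candidate = mutate(best)
    if worth(candidate) > worth(best):
        best = candidate  # keep the better heuristic, discard the worse

print(round(best, 1))  # the surviving heuristic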

Eurisko no longer exists today in any usable, stand-alone form, but it was a powerful eye-opener to the possibilities of near-term human-AI cooperation. Lenat went on to found Cycorp, whose ongoing Cyc project attempts to overcome the limitations of programs like Eurisko through massive libraries of common sense knowledge.

Copycat

A cooperative endeavor between Douglas Hofstadter and Melanie Mitchell, Copycat was intended to demonstrate how AI might be able to make decisions by drawing analogies. As a small, research-oriented project, Copycat was restricted to working with analogies of the alphabetic variety, and accepted problems like, “If abc changes to abd, what is the analogous change to kkjjii?” Problems like these frequently have more than one right answer, and more than one method by which a right answer can be determined. Hence, Copycat used many different computational elements (subprograms also called “agents”) that would attack the problem in different ways and evaluate the elegance or “satisfaction” of a given answer. Without any central control, the agents coming up with the best answers would naturally rise to the forefront and be the ones to give the final answers. And, similar to a panel of human judges, the “dissenting opinions” of the other agents would always remain under the surface.
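The real Copycat built its answers through swarms of small stochastic agents and a dynamically computed “temperature”; the sketch below hard-codes three rival strategies and guesses at their satisfaction scores, purely to show the arbitration step on the kkjjii problem:

```python
def literal_agent(s):
    # Apply "replace the last letter with its successor" verbatim.
    return s[:-1] + chr(ord(s[-1]) + 1), 0.3

def group_agent(s):
    # See the string as letter groups (kk jj ii) and take the
    # successor of the whole last group.
    last = s[-1]
    head = s.rstrip(last)
    return head + chr(ord(last) + 1) * (len(s) - len(head)), 0.6

def symmetry_agent(s):
    # Notice that kkjjii *descends* where abc ascends, so "successor"
    # should translate to "predecessor" of the last group.
    last = s[-1]
    head = s.rstrip(last)
    return head + chr(ord(last) - 1) * (len(s) - len(head)), 0.8

agents = [literal_agent, group_agent, symmetry_agent]
answers = [agent("kkjjii") for agent in agents]
best, satisfaction = max(answers, key=lambda pair: pair[1])
print(best)  # kkjjhh; the "dissenting" kkjjij and kkjjjj remain under the surface
```

No agent is in charge: each proposes its own answer, and the most satisfying one simply wins the arbitration.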

The ability to draw analogies will likely be essential to any real AI, even if the eventual method ends up being different. The multi-agent aspect of Copycat is found in some of today’s narrow-AI programs and real AI design concepts.

Now, as to the inevitable overhype in newly announced or released AI programs, a big part of the answer can be deduced through simple arithmetic. As my next response will help to demonstrate, the number of researchers currently engaged in projects that actually intend to create real AI is very small. In fact, news reporters covering tech beats that include AI probably outnumber researchers by at least an order of magnitude. Throw in the arcane complexity of the actual software, combined with a latent public excitement for the seemingly fantastic possibility of real AI, and you have a perfect recipe for sensational hyperbole.

Consider a reporter who, after yet another six-month drought of AI news, gets a lead on a new project exploring the use of “neural nets” to store information in a more accessible way. The actual mechanics of a neural net are fairly involved, but are loosely inspired by the way neurons work together in a brain. When this reporter’s story is published in a pop-sci context, it will therefore talk about an organization “like that in the human brain”, and will probably carry a title like “Brain 2.0”. Enthusiastic readers will get the understandable impression that the project is not only attempting to create real AI, but has unlocked the secret for doing so – even though it has done nothing of the sort.
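For perspective on how far the reality sits from the headline, here is what a single unit of a typical neural net boils down to – a generic textbook sigmoid neuron, not any particular project’s code:

```python
import math

# One artificial "neuron": a weighted sum of inputs squashed to (0, 1).
# Whole networks are just many of these wired together and tuned --
# a long way from "Brain 2.0".
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # a number between 0 and 1
```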

Many who might otherwise have been AI’s biggest fans have undoubtedly become its most obstinate critics as a result of this process, and researchers suffer the double disservice of misrepresentation and muddied waters, making it difficult for anyone who still takes real AI seriously to find – and fund – the most important projects.

©2002 by Mitchell Howe