

The Singularity Q&A

Q: Can we expect superintelligence to have any of the same values we do?

A: This question seems to assume that all superintelligences will be fundamentally similar to each other -- that there is no chance of one superintelligence sharing our values while another does not.  That assumption is almost certainly false.

It is, perhaps, possible that answers to the biggest questions of the cosmos are etched into the fabric of reality itself, and that any sufficiently intelligent mind would quickly discover them.  But even in this scenario, it seems unreasonable to conclude that every mind that discovers these answers would be compelled to respond to them in the same way.

Look at it this way:  Ants don't seem to have a whole lot of personality (though they undoubtedly have some quirks, as ant specialists -- or ants themselves -- would be quick to agree).  More intelligent creatures like primates show considerably more individuality, perhaps because their more sophisticated brains create more possibilities for variation within the genetically determined behavioral boundaries set by evolution.  Humans, probably the only creatures on this planet capable of understanding the abstract idea of "values" in the first place, come in even more ideological flavors -- although these, too, are always at least partially rooted in the basic set of adaptations coded into our genes.  Were this not so, we would not be able to debate competing ideas and ethical systems, since we would have no underlying universal values -- such as logical consistency and rationality -- to measure them against.  ("Because he wears blue jeans" is far less likely than "because of eyewitnesses A and B, DNA evidence C, and motive D" to convince a human jury of a suspect's guilt in a murder trial.)  Natural selection would have been very unlikely to favor those who could not argue their points using the logic accepted by others, leaving us with a long hereditary legacy of common ground.  So human values, as different as they are, are not, as a rule, nearly as varied as they could be.

But not only will superintelligences be far more intelligent than we are now -- increasing the range of possible values -- they will probably not be shaped by any process like natural selection that confines them to a socially accepted standard of logic.  Hence, superintelligences will, if anything, be capable of differing from each other to a far greater extent than we humans can differ from each other, so the fact that a mind is superintelligent doesn't necessarily say anything at all about the values it will hold.

That said, there is also good reason to suspect that a superintelligence would be at least "inclined" to strive towards the same underlying logic that we, at our "best", seem to strive for.  Specifically, the type of reasoning most likely to suggest to a thoughtful jury that a suspect did or did not commit a murder is also the type of reasoning most likely to actually find the real killer.  This "correspondence theory" of truth is the one that ultimately allows us to function in this universe, and it would presumably be required of any superintelligence that also intends to function here.  Hence, if the greater intelligence of a functioning mind suggests anything at all about its values, it is that logic and reason will probably be important, and implemented to a more consistent degree than we humans manage (there are solid evolutionary reasons why we are so imperfectly rational, which will come up in other questions).  So we might say, at least in this respect, that superintelligences are more likely to be reflections of our better selves than of our ugly ones.

This is certainly not to say that we have nothing to worry about, only that there is no prima facie reason to fear a mind merely because it is superintelligent.




©2002 by Mitchell Howe