

The Singularity Q&A

Q: How would neural interfaces work?

A: Neural interfacing refers to direct, useful connections between brains (neurons) and computers (transistors).  It is also referred to as BCI (brain-computer interfacing).  It is much more complicated than it sounds.

Neurons and transistors have almost nothing in common.  They both communicate using electrical impulses, but neurons do this through chemical reactions rather than external current.  Also, neurons are analog devices while transistors are digital (think of a mechanical clock based on gears vs. a digital clock based on the vibrating frequency of a quartz crystal in an electric current).  Neurons can be directly attached to over a thousand other neurons, deciding to "fire" an electric charge when the combined electrical input sent by their neighbors reaches a certain threshold.  Transistors, in contrast, are parts of a single circuit, typically affecting only the component directly ahead of them, like a note passed between people standing in a line.  The sensitivity of neurons is tuned by fluctuating concentrations of the chemicals directly between them, known as neurotransmitters.  Transistors have no comparable tuning feature.
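
To make the contrast concrete, here is a minimal sketch in Python of a threshold "neuron" alongside a transistor-like switch.  The weights, threshold, and input values are invented for illustration, not biological measurements; the point is only that one device weighs many inputs against a tunable threshold while the other passes a single signal down the line.

    # Toy contrast between a threshold "neuron" and a transistor-like switch.
    # All numbers here are illustrative assumptions, not measured values.

    def neuron_fires(inputs, weights, threshold):
        """Fire only when the weighted sum of many inputs crosses a threshold."""
        # The weights stand in for the tunable neurotransmitter concentrations
        # that strengthen or weaken each connection between neurons.
        total = sum(signal * weight for signal, weight in zip(inputs, weights))
        return total >= threshold

    def transistor_passes(gate_open, signal):
        """A transistor-like switch: pass the one incoming signal if the gate is open."""
        return signal if gate_open else 0

    # A "neuron" listening to four neighbors with different connection strengths:
    print(neuron_fires([1, 0, 1, 1], [0.4, 0.9, 0.3, 0.2], threshold=0.8))  # True
    # A "transistor" handing one bit to the next component in line:
    print(transistor_passes(gate_open=True, signal=1))                      # 1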

Neurons and transistors have other important differences, but by now you should at least be able to see why there is a very serious language barrier between them.  When clusters of neurons (like brains) and clusters of transistors (like computers) attempt to communicate, the differences are even more pronounced; the brain speaks a slow, deep language of thought very unlike the rapid, shallow computations of a computer chip.

So, without some pretty serious translation efforts, about the only thing brains and computers can hope to understand from each other is whether or not they are talking, and perhaps recognize a few different tones of voice.  This is essentially the level of the technology today, a level which allows for some interesting applications and opens routes to further study, but which does nothing that would be directly useful to most people. 

For instance, electrodes attached to the scalp can pick up variations in electrical activity we call brain waves, and the waves can vary depending on what the wearer is thinking about.  There is no way for a computer -- or a scientist -- to read the wearer's actual thoughts based only on those waves, but it is easy to tell when a pattern of waves changes, and to remember what certain patterns look like.  In this manner, an electrode-wearer can teach special software to recognize a few different patterns of waves, and thereby communicate simple commands by thinking different kinds of thoughts.  It is enough to drive a basic menu interface (e.g. "back", "next", "select"), or to play very simple games.  The usage is somewhat slow and tedious, and nowhere near as fast as a keyboard, mouse, or game controller, but for the completely paralyzed, this technology might be the only means of communicating with the outside world.
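
As a minimal sketch of how that training might work, the Python below matches a new brain-wave sample against a stored "fingerprint" for each command.  The three-number feature vectors (think of them as power in a few frequency bands) and the command names are assumptions made up for illustration; real EEG software is considerably more elaborate.

    # Sketch of the "teach the software to recognize a few wave patterns" idea.
    # The feature vectors are invented placeholders for measured EEG features.
    import math

    # Training phase: the wearer thinks each kind of thought while the software
    # records an average "fingerprint" for it.
    command_fingerprints = {
        "back":   [0.8, 0.2, 0.1],
        "next":   [0.2, 0.9, 0.3],
        "select": [0.1, 0.3, 0.8],
    }

    def classify(sample):
        """Return the command whose stored fingerprint is closest to this sample."""
        return min(command_fingerprints,
                   key=lambda cmd: math.dist(sample, command_fingerprints[cmd]))

    print(classify([0.75, 0.25, 0.15]))  # -> "back"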

When the communication is going in the opposite direction -- computer to brain -- more invasive techniques are needed, at least for the time being.  These kinds of connections make use of existing input channels for the brain, particularly those extending from the sensory organs.  The optic nerve is by far the largest, and perhaps best studied, of these input channels, and some limited success has been achieved with so-called "artificial retinas" that convert a video image into electrical impulses sent directly down the optic nerve.  The images transmitted are of extremely low quality; today's artificial transmitters lack the subtlety of the eye's own, and must be attached to individual neurons through sheer luck or painstaking effort.  Someone seeing through an artificial retina can generally distinguish between light conditions and dark ones, and perhaps identify a few general classes of objects, but can see little else.  Naturally, this level of communication would be useless for someone with functioning eyes, since a computer monitor can already transmit computer information down optic nerves with great efficiency.  But, for the blind, this type of device offers real hope; bad vision is far better than no vision, and the technology is steadily improving.
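
The sketch below shows, in crude form, why the picture is so poor: a camera image with thousands of pixels has to be squeezed onto whatever small grid of electrodes actually makes contact.  The 4x4 electrode grid and the test image are invented for illustration.

    # Why an artificial retina delivers such a crude picture: a detailed camera
    # image must be averaged down to a handful of electrode stimulation levels.
    # The grid size and image below are invented for illustration.

    def downsample_to_electrodes(image, grid=4):
        """Average a square image down to a tiny grid of stimulation levels (0-1)."""
        block = len(image) // grid
        levels = []
        for row in range(grid):
            row_levels = []
            for col in range(grid):
                patch = [image[r][c]
                         for r in range(row * block, (row + 1) * block)
                         for c in range(col * block, (col + 1) * block)]
                row_levels.append(sum(patch) / len(patch))
            levels.append(row_levels)
        return levels

    # A 16x16 "camera image" that is bright on the left and dark on the right...
    image = [[1.0 if c < 8 else 0.0 for c in range(16)] for r in range(16)]
    # ...becomes a 4x4 pattern: enough to tell light from dark, and little more.
    for row in downsample_to_electrodes(image):
        print(row)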

Indeed, interfacing has excellent long-term prospects.  The next major leap, admittedly a ways off, might allow for two-way communication of simple information, such as individual words or numbers.  The most commonly cited example of this would be a mental calculator.  You might input some numbers and the operation you want performed on them by thinking them clearly, with the same precision you would use if you were actually pressing the equivalent keys.  The calculator, with its blazing speed, would then return the answer to you, popping it into your head as though you had just heard the number spoken or read it on a screen.  A more useful variation on this example would be a mental connection to the internet, which would, at the very least, allow you to ask questions with simple answers (as determined by text-analysis software poring over web pages).  What is the capital of Peru?  Will it rain today?  What time is the next showing of Cheesy Sequel III at the local monsterplex?  This level of interface might also allow you to mentally chat with others through existing text messaging programs, creating a kind of poor-man's telepathy.
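
A toy version of the mental-calculator idea might look like the Python below: the interface hands the computer a cleanly "thought" expression, and the answer pops back.  The input format and the supported operations are assumptions made purely for illustration.

    # Toy mental calculator: a cleanly "thought" expression goes in, the answer
    # pops back as though heard or read.  The input format is an assumption.
    import operator

    OPERATIONS = {"+": operator.add, "-": operator.sub,
                  "*": operator.mul, "/": operator.truediv}

    def mental_calculator(thought_input):
        """Evaluate a simple 'number operator number' thought, e.g. '365 * 24'."""
        left, op, right = thought_input.split()
        return OPERATIONS[op](float(left), float(right))

    print(mental_calculator("365 * 24"))  # 8760.0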

But the above applications would still, at best, be only about as useful as the small wearable computers that should be available by the time this level of interfacing has been achieved.  To get any real advantage out of these neural connections, they would have to operate very quickly.  The theoretical limits on a basic input/output channel would be about the same as those of your existing senses and organs.  At these speeds, you could mentally "type" about as quickly as you could speak.  You could "listen" about as fast as you can hear someone speak -- or perhaps at the same rate as you can read, which is often faster.
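
To put rough numbers on that, the sketch below compares commonly quoted ballpark rates for typing, speaking, and reading.  The words-per-minute figures and the five-characters-per-word conversion are only order-of-magnitude assumptions, not measurements.

    # Rough orders of magnitude for the "same speed as your existing senses" point.
    # The words-per-minute figures are ballpark estimates, not measurements.
    rates_wpm = {
        "typing (average keyboard user)": 40,
        "speaking / mental 'typing'":     150,
        "listening to speech":            150,
        "silent reading":                 250,
    }

    for channel, wpm in rates_wpm.items():
        # Assume ~5 characters per word to get a crude characters-per-second figure.
        print(f"{channel}: ~{wpm} words/min, about {wpm * 5 / 60:.0f} characters/sec")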

The ultimate goals of neural interfacing -- the hard-core applications seen in the so-called "cyberpunk" genre of science fiction -- are far more ambitious.  For instance, "virtual reality" might be delivered to your brain by temporarily hijacking all of your sensory input channels and replacing them with computer-generated information; the neural impulses you send to your voluntary muscles would be intercepted as output.  (This is the level of interface portrayed in the movie The Matrix.)  Even more exciting than full mental immersion would be mental augmentation -- literally enhancing the capacity of your mind through connecting portions of it to artificial components that store and process information in a way you can readily access.  

Augmentation could come at multiple levels.  At its simplest, this could mean instantaneous access to information without needing to think textually about what you want to know.  You wouldn't need to "ask" what time the movie starts; you would already know the time the moment you thought about it.  This level of interface might allow you to add precision to your existing senses; imagine looking at two trees and knowing, if you wanted to at any particular moment, exactly how tall and far apart they are.  (The actual calculations would be performed, based on what you see, by powerful computers, but what you would know would be the results.)  To borrow an example from the fiction of Vernor Vinge, you could explore millions of moves per second in a game of chess, taking advantage of your own abstract reasoning as well as a computer's raw power.
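
As a sketch of the tree example, the Python below recovers distance and height from the kinds of angles your eyes already report, using nothing fancier than basic trigonometry.  The eye separation and angles are invented numbers, and the geometry is deliberately simplified.

    # Sketch of "look at two trees and just know": a computer turns the angles
    # your eyes report into distances and heights.  All numbers are invented.
    import math

    def distance_from_parallax(eye_separation_m, parallax_deg):
        """Estimate distance from the small angular difference between two eyes."""
        return eye_separation_m / math.tan(math.radians(parallax_deg))

    def height_from_angle(distance_m, angular_height_deg):
        """Estimate an object's height from its distance and the angle it subtends."""
        return distance_m * math.tan(math.radians(angular_height_deg))

    d = distance_from_parallax(eye_separation_m=0.065, parallax_deg=0.09)  # ~41 m away
    h = height_from_angle(d, angular_height_deg=16.0)                      # ~12 m tall
    print(f"distance: {d:.0f} m, height: {h:.0f} m")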

But there would be an important limitation to this level of augmentation, since your own mind would still be stuck doing the actual internalization of information.  Have you ever read a chapter of a textbook without understanding any of it?  You could have similar problems with "immediate-access" augmentation: not being able to make sense of something simply because you had not yet acquired the necessary associations.  For instance, you might quickly absorb the content of Shakespeare's Hamlet if you spent some time thinking about it, but, prior to this investment of attention, having Hamlet "on-tap" would do nothing for you.  As a consequence, you might fail to appreciate a bit of satire in which someone converses with a human skull.

The ultimate level of augmentation, then, would seem to be having all of this information on-tap in a way that was pre-internalized.  With a knowledge base this powerful, you would be a superhuman intelligence by any reasonable measure.  You would be like someone who had read, heard, or viewed -- and understood -- everything ever made accessible on the internet.  (It goes without saying that your knowledge of sex would make Casanova blush.)  Any equipment that could do this with your brain would also, more than likely, allow you to think with all the speed and other performance increases I talked about here.

This would be pretty powerful stuff, and an indication that the very nature of intelligence had been identified and reproduced in a machine; you would be, in effect, a human/AI hybrid.  So, the problems to be solved prior to achieving the strongest types of neural interfacing are actually some of the very same problems that must be solved for Artificial Intelligence.  In all likelihood, this gives AI a considerable leg up (over interfacing) in the race to the Singularity; AI researchers probably don't need to precisely understand the arcane patterns of human thought, nor do they need to design special hardware for linking neurons and transistors.

But neural interfaces may still have an important role to play in the Singularity, especially if AI turns out to be a tougher problem than most researchers believe; the critical breakthroughs in Artificial Intelligence might well be made by scientists taking advantage of neural interfacing.




©2002 by Mitchell Howe