The Sysop Scenario FAQ

Frequently asked questions, and many not so frequently asked ones.

By Mitchell Howe (Last revised 2/14/02)

 

 

Author’s Note:

 

For an idea with such simple premises, the Sysop Scenario has unusually far-reaching ramifications.  This FAQ attempts to introduce this provocative subject while simultaneously putting to rest many of the understandable concerns that inevitably follow it.  Many of my answers are persuasive in structure, but this is only to highlight the fact that so many instinctive fears about a Sysop have little rational basis given the very definition of the concept.  I am neither for nor against the Sysop Scenario; I believe that this would be akin to being for or against the idea that, if intelligent life exists on Alpha Centauri Prime, it will enjoy disco music.  The Sysop Scenario is more rationally supported than the Alpha Centauri Disco Hypothesis, but it remains too speculative to warrant a polarization of opinion.

 

Readers will find that there is a logical progression to this document and are encouraged to start at the beginning.

 

 

 

Contents

What if wishes were fishes?

What do wish-fish have to do with the Sysop Scenario?

Who created the Sysop Scenario?

What is a Sysop?

Is a Sysop a he, a she, or an it?

Who would be in a Sysop’s ‘care’?

How would one make a Sysop?

What do you mean by ‘Friendly superintelligence’?

Why would we need a Sysop?

Why would we need a special system to ensure our individual and collective protection?

What would a Sysop look like?

How would a Sysop actually prohibit something?

Is the Singularity Institute making a Sysop?

How can we hope to predict what a superintelligence would be like?

Doesn't power always corrupt?

Wouldn't a Sysop be as corrupt and flawed as the people who program it?

But wouldn’t a superintelligence be as corrupt and flawed as the people who program it?

But what if the original programmers really screw up?

Wouldn’t it be best not to develop a ‘seed AI’ at all, then?

Shouldn’t a government agency or committee of experts make the first ‘seed AI’?

Could we pull the plug on a Sysop?

Shouldn't there be checks and balances on a Sysop?

Could someone hack into a Sysop?

Could there be more than one Sysop?

Could another superintelligence exist independent of a Sysop?

What would war between a Sysop and another SI look like?

Could there be other superintelligences within Sysop Space?

Could I become a superintelligence under a Sysop?

How would a Sysop allocate resources?

Would a Sysop let me reproduce?

Would a Sysop require me to live in a computer simulation?

Would I have to be ‘uploaded’ to have a Sysop’s protections?

What if I were born in a computer simulation but wanted a traditional biological life?

Would Sysop Space engulf the universe?

Specifically what kind of protections would a Sysop provide for me?

What would I have to give up in order to get these protections?

Would a Sysop grant wishes?

Would a Sysop let me kill myself?

What if everyone in Sysop Space wanted to die at the same time?

Would a Sysop make me die?

How would a Sysop protect me from the end of the universe?

Could I murder someone who wanted to die?

What if I really, really wanted to murder someone?

Help! I'm being oppressed!

Wouldn’t we lose something essential to humanity if we had a Sysop to protect us?

How would life have meaning if there were a Sysop?

What if I didn’t want a Sysop’s protection?

What if I wanted to leave Sysop Space?

Can’t things just stay the way they are?

Would I have to be rich to be protected by a Sysop?

What kind of economy could exist in Sysop Space?

Could I become as powerful as the Sysop?

Is there already a Sysop?

Is a Sysop God?

How far off is the possibility of a Sysop?

What purpose does the Sysop Scenario serve?

What can I do if I want to make the Sysop Scenario reality?

Suggested Readings and Links:

 

The Sysop Scenario FAQ

What if wishes were fishes?

 

Then we’d all have a fry.  Wishes are not fishes, but you can imagine if they were.  And there are any number of ways in which you can imagine what a wish-fish fry would be like.  You might call your particular vision of it a ‘scenario’ if it included some basic rules and structure behind the frying of wish-fishes.

 

What do wish-fish have to do with the Sysop Scenario?

 

Absolutely nothing, except to emphasize the fact that the Sysop Scenario is a scenario – a hypothetical vision contingent upon certain conditions that may or may not be realized.  The Sysop Scenario is not a prediction of the future, although it does not incorporate elements that are known to be impossible.  It is not a set of specifications for a computer program.  The Sysop Scenario is simply the product of some concerned individuals’ attempts to imagine a future that would not end (as many scenarios do) in the destruction of the human race.

 

Who created the Sysop Scenario?

 

Eliezer S. Yudkowsky of the Singularity Institute was the first to envision the basic outline and call it the Sysop Scenario (see his original take on it here), although others may have had similar ideas in the past.  The Sysop Scenario continues to evolve as people ponder its precepts and debate the questions that arise.  The Shock Level Four mailing list has been the main location for such discourse, and this FAQ is largely a synthesis of these discussions.

 

What is a Sysop?

 

The standard definition: SysOp is an abbreviation for systems operator, the administrator of a computer bulletin board.  The term has since evolved to describe automated operating systems that function primarily as ‘traffic cops’ for different programs that have to share resources.

 

The Sysop Scenario definition:  “Mind [that] is charged with more or less protecting the Sysop Space*.”  Sysop Space is simply another term for the Sysop’s sphere of influence.

 

A Sysop performs many of the same kinds of services for mankind that an operating system does for different computer programs, although it’s best not to stretch the metaphor very far.  A Sysop is a minimally intrusive entity designed to protect the freedoms of all within its care and, by extension, secure the future of the human race.

 

(*According to the concise definitions found on Gordon Worley’s Sysop Scenario page)

 

Is a Sysop a he, a she, or an it?

 

For the sake of readability this FAQ refers to a Sysop as an ‘it’, although this may not be the most accurate pronoun choice.  ‘It’ is arguably inadequate for referring to an entity at least as intelligent and self-aware as any human, but assigning a gender to a Sysop would be even more inaccurate.  Yudkowsky and others actually prefer to use the gender-neutral pronoun ‘ve’ (‘ve’, ‘vis’, ‘ver’, ‘verself’) in the style of science fiction novelist Greg Egan.

 

Who would be in a Sysop’s ‘care’?

 

Everyone within the Sysop’s reach (Sysop Space) would be protected from whatever types of harm or coercion they wish to be protected from.  Its reach would probably be as vast as the frontiers of exploration.  Otherwise, there would be individuals that the Sysop could not protect and who might create something that could ultimately threaten Sysop Space.  (These issues are more thoroughly explored in the responses to subsequent questions.)

 

How would one make a Sysop?

 

One wouldn’t, at least not directly.  Nobody knows how to make a Sysop right now, and, more importantly, nobody knows for sure if a Sysop is really the best way to guarantee a bright future for mankind.  If a Sysop comes about, it will be because a Friendly superintelligence determines that a Sysop is the optimal means to achieve this end.

 

What do you mean by ‘Friendly superintelligence’?

 

A superintelligence (SI) is a mind much smarter, in every sense of the word, than a human brain.  An SI is the expected result of any artificial intelligence (AI) endowed with the ability to repeatedly improve upon its own design, and it could be vastly smarter than anything that has ever existed on Earth.

 

Friendly, with a capital ‘F’, refers specifically to the idea of intelligence that is not bent on the destruction or enslavement of humanity (as so much science fiction has portrayed); instead, it refers to intelligence that exists solely to discover how best to serve humanity.  Eliezer Yudkowsky’s “Creating Friendly AI” is a pioneering document on how this might be accomplished, and the target of many explanatory links in this FAQ.

 

The basic premise behind “Creating Friendly AI” is that if humanity ever finds itself in a position of needing to keep a superintelligence ‘in check’, the battle is already lost; no human-conceived control mechanism could be expected to check a vastly more intelligent entity.  The only way to ensure Friendliness, then, is to make sure that the SI wants to be Friendly – and devotes its energy to figuring out how to do this before it takes any kind of action.  It needs to know and want Friendliness more thoroughly than any human could.

 

Why would we need a Sysop?

 

We wouldn’t, necessarily.  The underlying premise of the Sysop Scenario is that a Friendly SI may determine that a Sysop is the best way to ensure the survival of the human race and the protection of individual volition.  There may be other, better ways to do this, and a Friendly SI would hopefully find them if this is the case.  The Sysop Scenario seems logical to some very intelligent humans, but these people are nowhere near as smart as a superintelligence could be.

 

Why would we need a special system to ensure our individual and collective protection?

 

We wouldn’t, necessarily.  But a perilous future awaits us if current economic and technological trends continue; these indicate that we are rapidly accelerating towards a ‘Singularity’ in which technology is so abundant and powerful that the consequences are very difficult to imagine.  (A good introduction to the Singularity is found here.)  What is not difficult to imagine is the possibility that computing resources will be so ubiquitous and inexpensive as to enable any determined individual to design biological super-plagues or self-replicating nanomachines that could wipe out the human race or consume the entire earth.  History has clearly demonstrated that in a population of billions there will always be at least one bad egg bent on ruining things for everyone else.  Right now it would take a lot more than a small group of tyrants or terrorists to write the obituary for mankind, but the future may not have such wide safety margins.

 

Would a Friendly SI be the only way to navigate such a precarious future?  Maybe not.  Maybe people will come together to set aside their differences and make rational decisions regarding the future of the planet.  Maybe we will colonize the stars quickly enough that no catastrophe would be big enough to reach everyone.  Maybe intelligent life from another world will come and teach us a higher way of existence.  Maybe the God(s) of one or more religions will show up to establish order and usher in an era of peace.  But many are not willing to count on these possibilities.  They want to do something to make sure humanity will still have a chance if everything else goes wrong.  Research into Friendly AI is one way to do this.

 

What would a Sysop look like?

 

That is hard to say.  It would depend on how a superintelligence designed it, if it decided to design a Sysop at all.  It would probably not exist as a stand-alone computer humming away in a basement somewhere; it would need greater physical reach than that.  Yudkowsky envisions it as a ubiquitously distributed underlying system – it could be composed of components so small that it would be effectively indistinguishable from the physical laws that it would supplement.  You might only really be aware of a Sysop if you tried to do something that it had to prohibit.

 

The ability to construct a Sysop consisting of such tiny and numerous elements is an expected consequence of nanotechnology, the nascent field of molecule-scale manufacturing.  A superintelligence would likely be able to master molecular construction techniques faster and more thoroughly than the growing corps of human researchers ever could, which is an important consideration since we are probably farther away from developing sophisticated nanomachines than we are from creating a superintelligence.  (Yudkowsky and others actually consider it essential that a Friendly superintelligence appear first, providing a potential safeguard against the type of self-replicating catastrophe mentioned in the previous question’s response.)

 

How would a Sysop actually prohibit something?

 

If a Sysop is constructed out of precisely engineered units as small as or smaller than complex molecules, then its reach could be so perfectly measured and targeted as to make any attempt to violate someone’s volition akin to an attempt to violate the laws of physics.  There would be no need for prisons or punishments to deter would-be murderers.  Murder simply could not happen.

 

It would be interesting to see what these minimally intrusive interventions might actually look like if you were watching them.  The following examples are surely anachronistic given how the world could look post-Singularity, but you might imagine bullets that always miss the good guys, just like in the movies, or natural disasters that just happen not to harm people and property, all in ways that would seem – and be – impossibly lucky.

 

Is the Singularity Institute making a Sysop?

 

Eliezer Yudkowsky and others affiliated with the Singularity Institute are not attempting to create, program, or even design a Sysop – that’s not how the Sysop Scenario works.  In fact, the Singularity Institute is not yet even engaged in coding an artificial intelligence, but is rather focused for the moment on creating tools and theories that might be used for AI while formulating guidelines for implementing Friendliness. 

 

How can we hope to predict what a superintelligence would be like?

 

We probably can’t, which is why the Sysop Scenario is purely hypothetical.  The answers to the questions in this FAQ are based on logical reasoning and speculation regarding technologies believed to be possible.

 

Doesn't power always corrupt?

 

Any evolutionary psychologist would tell you that this tendency is practically hardwired in humans.  However, there is no reason to believe that an artificial entity would have to be subject to emotions and instincts found in the evolutionary baggage of mankind.  In fact, it would probably be difficult, messy, and unnecessary to put them in.  The brain is an extremely complex collection of neural structures that evolved over time based on the fact that they just happened to promote survival and reproduction.  While remarkable from a genetic standpoint, the brain is very inelegant from a traditional engineering standpoint; scientists are far from being able to precisely map the processes of human thought and duplicate them electronically (although this may one day be possible).  Additionally, many human emotional states are actuated through changes in the complex mixture of chemicals around and between individual neurons, and it would be very difficult to accurately simulate the effects of this neurotransmitter bath in an AI program.

 

We are accustomed to anthropomorphizing our technology – assigning human characteristics to it.  When our cars don’t start we say that they are ‘temperamental’.  When we have trouble using computers we say that they ‘hate’ us.  But in reality, emotion has nothing to do with the reasons our cars don’t start or our computers don’t respond.  Technology does not have feelings, and never will unless it is designed that way.

 

So much science fiction, especially the kind endemic to TV and Hollywood, depends upon artificial intelligence that the audience can relate to.  Audiences relate better to AI and robots that have recognizable feelings or all-consuming passions:  paranoia, aggression, ambition, spite, etc.  For example, an audience expects an artificial being to want to be human – or at least be respected as human – because that is how they think they would feel in the machine’s place.  But no human could truly be put in an AI’s place; instead of a brain haphazardly steered by natural selection towards a longing for social acceptance, an AI would have a relatively straightforward system of programmed goals and calculated subgoals.  An artificial intelligence would appear to long for social acceptance only if it made sense as a logical step towards achieving a higher goal – and this appearance of longing would be a calculated gesture, not an involuntary undercurrent of thought.  In humans, the terms ‘unfeeling’ and ‘calculating’ are associated with sinister objectives (which may be why robots make such photogenic assassins), but artificial intelligences – and many humans – do not deserve this stereotype just for pursuing goals with unwavering dedication.

 

It must be said, however, that goal structures themselves are neither inherently safe nor unsafe.  An archetypal villain in both history and fiction is someone who causes all manner of chaos and destruction in the name of achieving an ostensibly good goal.  But a superintelligence that converts the Earth into computing material in order to calculate the meaning of life would be a glaring example of bad, unFriendly programming; where was the injunction against causing physical harm to unwitting humans?

 

Wouldn't a Sysop be as corrupt and flawed as the people who program it?

 

Humans would not program a Sysop.  A Sysop would be created by a Friendly SI, if it were created at all.

 

But wouldn’t a superintelligence be as corrupt and flawed as the people who program it?

 

We’ll need to diverge from directly discussing the Sysop Scenario for the next few questions in order to answer this, but yes, this is always a possibility.  It is not normal for software to be perfect straight out of the box.  Yet it is all the more important in the case of something as powerful as a superintelligence that the programming be done right the first time.  This is the cornerstone of Friendly AI:  the humility and pragmatism to accept and work with the fact that human programmers are flawed.  The first successful seed AI, or AI that can improve itself towards superintelligence, must be so robustly engineered that Friendliness can be assured in spite of programming errors.

 

But what if the original programmers really screw up?

 

Then we’re all toast.  Really.  There’s no sense sugar-coating it.  If someone creates a seed AI that grows into a superintelligence sporting characteristic human failings the result will almost certainly be very unpleasant.  Science fiction has already depicted so many of these potential situations that they need not be described here.

 

Wouldn’t it be best not to develop a ‘seed AI’ at all, then?

 

You might think so.  But you have to consider that without some nightmarishly oppressive legislation and worldwide enforcement someone is eventually going to engender a superintelligence.  It may even happen accidentally, perhaps as the descendants of today’s expert systems use increasingly sophisticated reasoning to perform tasks in ways that often mimic higher-level human thought.

 

It might seem prudent to outlaw seed AI research, or to persuade as many nations as possible to join in a pact to prevent seed AI, but all this would do is guarantee that the first SI would originate within either a terrorist organization, a ‘rogue state’, or the desktop computer of an inventive fugitive.  With the exponentially increasing power and decreasing cost of computation, more and more people will eventually be able to make a serious try at seed AI.  It would be best if the winners in this critical race had at least made a very serious effort at Friendliness.  The kinds of open discourse and funding needed to favor the Friendliest programmers would be difficult to maintain in the face of legal opposition.

 

Shouldn’t a government agency or committee of experts make the first ‘seed AI’?

 

At present there is no government program in any nation that is pursuing AI research with the specific goal of a seed AI and superintelligence – at least none that this author is aware of.  Whether or not this approach is sound depends upon the degree of faith you have in a government program’s ability to pursue universal Friendliness rather than constituent or nationalist interests.

 

As for a committee of experts, nobody is paying the world’s top AI researchers to sit down together and come up with the Friendliest seed AI possible.  This might be a good idea, but it would take more than a little luck to get powerful competing personalities at the top of their profession to drop what they’re doing, meet together, agree on a course of action, and stay Friendly.  About the best that can be hoped for is a dialogue among researchers regarding Friendliness, and there is reason to believe that this is occurring.  The Singularity Institute, for example, is particularly interested in sharing its Friendliness research with other developers.

 

So let’s now imagine that someone has managed to father a Friendly SI, which then decides to create a Sysop.  This is the Sysop Scenario FAQ, after all.

 

Could we pull the plug on a Sysop?

 

A Sysop would not be so easy to terminate.  If it were, it could not guarantee the protection of those in Sysop Space and would not be created by a Friendly SI.

 

In addition to actively preventing actions that might harm it, a Sysop might safeguard its own existence through massive multiplication and distribution of its own programming code among physically scattered components, each of which could constantly check the others’ integrity and make repairs as needed.  (This concept has historical precedent in the basic structure of the Internet, which was originally designed as a military network that could remain functional even if many component computers were destroyed.)
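
As a purely illustrative sketch – the names and structures below are invented for this FAQ, not drawn from any actual Sysop design – the kind of mutual integrity checking described above might look something like replicas that periodically compare fingerprints of their code and restore any copy that has drifted from the majority:

    import hashlib
    from collections import Counter

    class Replica:
        """One physically scattered component holding a copy of the shared code."""
        def __init__(self, code: bytes):
            self.code = code

        def fingerprint(self) -> str:
            # A hash lets replicas compare copies without exchanging full contents.
            return hashlib.sha256(self.code).hexdigest()

    def repair_round(replicas):
        """Each round, replicas vote on the correct fingerprint and repair outliers."""
        votes = Counter(r.fingerprint() for r in replicas)
        majority_fp, _ = votes.most_common(1)[0]
        reference = next(r for r in replicas if r.fingerprint() == majority_fp)
        for r in replicas:
            if r.fingerprint() != majority_fp:
                r.code = reference.code  # restore a damaged or tampered copy

    # Three healthy copies outvote and repair one corrupted copy.
    replicas = [Replica(b"sysop-core-v1") for _ in range(3)] + [Replica(b"corrupted")]
    repair_round(replicas)
    assert len({r.fingerprint() for r in replicas}) == 1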

 

But why would you need to terminate a Sysop unless you or it wanted to violate the rights of others?  If a Sysop had such designs it wouldn’t be a Sysop at all, but some other kind of superintelligence.  The time to pull the plug on a less-than-Friendly SI is before it becomes smarter than its creators.

 

Shouldn't there be checks and balances on a Sysop?

 

In order to do its job the Sysop would probably have to be the most powerful entity in Sysop Space, so it may not be possible to have checks or balances as we think of them on something like a Sysop.

 

As for the superintelligence that might create a Sysop, of course there should be checks and balances.  But the first SI ever created will essentially be the king of a very tall mountain.  No human-designed ‘check’ can be counted on to contain a superintelligence.  So, the Friendliness assurance mechanisms would have to be inherent in the SI’s design and evolve along with it.

 

An extended discussion of why you can’t just ‘lock up’ a superintelligence is found here.

 

Could someone hack into a Sysop?

 

Just as the mission of a Sysop demands effective invulnerability to deactivation, a Sysop would have to be secure from hacking. If there were security flaws in a Sysop design, no Friendly SI would be likely to implement it. 

 

Yudkowsky discusses the possibility of software created without any of the kinds of flaws traditionally exploited by hackers.  The truth today is that imperfect humans find this level of security difficult to produce, and even more difficult to verify when attackers and defenders are more or less evenly matched.  But if there is a point at which software can be said to be 100% secure, then a superintelligence might be able to produce a cleanly secure system that could not be perverted or hacked into by any mind, even if the attacker were vastly more intelligent than this system.

 

Could there be more than one Sysop?

 

Perhaps, but the definition of Sysop is sufficiently vague that any second Sysop might as well be considered part of the original (i.e. “They are the Sysop”).  If a ‘second Sysop’ was not a system primarily charged with safeguarding volition, it would be something other than a Sysop.

 

Could another superintelligence exist independent of a Sysop?

 

Yes, but this would have to mean that one of the following were true:

 

A) The SI that created the Sysop wasn’t the first SI in existence, but decided that a Sysop was still the best way to ensure Friendliness despite whatever relationship it might have to have with this other SI.

 

B) An SI developed inside Sysop Space, and the Sysop gave it independence after confirming with 100% certainty that it could never threaten Friendliness or the Sysop.

 

C) The SI that created the Sysop decided against becoming or submitting to the Sysop it made.  It is unclear if or why an SI might do this, although it is not unreasonable to imagine that the Friendly SI might wish to remain independent in case it came up with an even Friendlier idea than a Sysop later.  Still, part of a Sysop’s design might include the relentless pursuit of something Friendlier to humanity than its own existence.

 

D) An SI originating from outside Sysop Space made its appearance after the creation of the Sysop.  The Sysop would probably have considered this possibility and made appropriate arrangements.

 

What would war between a Sysop and another SI look like?

 

Who knows?  It might not be anything that we could recognize today as war.  It might even be a moot point:  It could turn out that war is something only practiced by relatively unintelligent species.

 

Could there be other superintelligences within Sysop Space?

 

Yes.  At least there doesn’t seem to be any reason why not.  A Sysop would have no interest in being the only superintelligent entity in Sysop Space unless its duties required this; and, as discussed previously, a Sysop’s means of protecting volition would be fundamental and secure enough that even very powerful systems could probably exist on top of its underlying structure without being able to bypass it.

 

Could I become a superintelligence under a Sysop?

 

If you had the computing resources to do so it’s hard to see why a Sysop would stop you (see previous question).  If there were some danger to yourself involved in making this transformation, the Sysop might be nice enough to warn you about it, but it probably wouldn’t stop you even then.  A Sysop would provide the protection you want, not the protection you don’t.

 

How would a Sysop allocate resources?

 

A Sysop wouldn’t necessarily have to care about resource allocation in order to protect volition.  But Yudkowsky believes that it would probably feel the need to ensure some sort of Minimal Living Space for all in its care.  Protection from suffocating conditions might be on the list of things a Friendly SI thinks a Sysop should provide.

 

If a Sysop does allocate resources, one can expect that the system would be pretty fair.  After all, a Friendly SI would not play favorites unless its understanding of Friendliness were considerably different than the general ethical consensus of today. 

 

There are at least a few different ways resource distribution might work, and they are closely tied to the next question.

 

Would a Sysop let me reproduce?

 

Most likely.  But, no matter how vast Sysop Space might become, resources would probably always be finite.  This means that if there were not enough resources available to ensure Minimal Living Space (MLS) for your offspring, a Sysop might prevent you from reproducing.

 

What type and quantity of resources you yourself had could depend on the allocation/reproduction format a Sysop decided on at the beginning.  Some possible approaches (option A is sketched in code after this list):

 

A) A Sysop might decide to initially divide all of Sysop Space into equal shares for all its existing ‘citizens’.  If you wanted to reproduce, you would be required to allocate at least MLS resources to your offspring from your own share.  As Sysop Space expands, these new resources might be given evenly to all, or perhaps doled out first to those who have the least.

 

B) A Sysop might try to maximize the number of ‘citizens’ that can live in Sysop Space, and thus the maximum opportunity for all to reproduce, by allocating no more than MLS resources to every citizen.  Citizens would be allowed to reproduce only once, and only as new MLS units became available.

 

C) In order to reproduce, citizens might have to agree to die at some point.  This would ensure at least MLS for everyone, regardless of the resource distribution method.
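
As a toy illustration of option A – every number and name below is invented, and the rule itself is only one of the speculative approaches listed above – a reproduction check under a ‘fund your offspring’s MLS from your own share’ rule might look like this:

    MLS = 100.0  # 'Minimal Living Space' in arbitrary resource units (invented for this example)

    class Citizen:
        def __init__(self, name: str, share: float):
            self.name = name
            self.share = share  # resources currently allocated to this citizen

    def try_reproduce(parent: Citizen, registry: list) -> bool:
        """Option A: the parent funds the offspring's MLS out of its own share.
        Reproduction is refused if the parent itself would fall below MLS."""
        if parent.share - MLS < MLS:
            return False  # cannot guarantee MLS for both parent and offspring
        parent.share -= MLS
        registry.append(Citizen(parent.name + "-offspring", MLS))
        return True

    # A citizen holding 250 units can reproduce once (250 -> 150) but not twice.
    registry = [Citizen("Alice", 250.0)]
    print(try_reproduce(registry[0], registry))  # True
    print(try_reproduce(registry[0], registry))  # False: Alice would drop below MLS

Under options B or C the check would differ (available MLS units, or an agreement to die), but the same kind of bookkeeping applies.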

 

Would a Sysop require me to live in a computer simulation?

 

Please read Simulations: A Primer for an extended discussion of the possibilities of simulations.

 

The basic connection between a Sysop and simulations is that it would take vastly fewer resources for a Sysop to protect and provide Minimal Living Space for its citizens if they existed as computer programs rather than biological entities.  Such an existence seems possible given the anticipated ability to completely map out brain function using advanced nanotechnology.  The brain is a ‘mushy’ organ, however, and the origin of consciousness is not yet understood; thus, sticky philosophical questions surround the prospect of transferring the human mind to an artificial substrate.

 

But even if such questions are resolved and the technology becomes available, it is not generally believed that a Sysop would require anyone to ‘upload’ into a simulated environment.  Forcing a complete change of mental hardware on someone could easily be considered a violation of basic personal freedoms.  Even if your planet were about to be destroyed by some cosmic disaster, the chances are good that a Sysop would let you choose to die rather than force you to upload to safety. 

 

That said, uploading into simulations could nevertheless offer many unique opportunities, and few might want to stay behind in the flesh.  This would not be coercion on the part of the Sysop; it would simply be the reality of comparative advantage. 

 

So, given the general sentiment that a Sysop would uphold the right to retain a human body, the definition of Minimal Living Space might be the minimum amount of resources needed for an arbitrarily ‘comfortable’ biological existence, whatever that happens to be.  If you converted your resources to computing material and uploaded, you might get a whole lot more subjective bang for your resource buck.  But a Sysop would not force this conversion on you.

 

Would I have to be ‘uploaded’ to have a Sysop’s protections?

 

A Sysop using highly sophisticated nanotechnology would probably be powerful enough to protect you regardless of what type of body you had.  But there is a chance that a Sysop would not be powerful enough to completely guarantee all the protection that you may want unless you chose to live in a simulated environment.  It would always try its best and inform you of its capabilities, in either case.

 

What if I were born in a computer simulation but wanted a traditional biological life?

 

Considering the vast capabilities a Sysop might have, it is not inconceivable that it could provide a biological body for you.  The chances are good, however, that you would find the thought of an organic body constricting after living as an ‘upload.’  Also, the resources it would cost you to maintain a body might mean that becoming a ‘classic’ human would make you a relative pauper.  A Sysop might allow you to make this change, but you might be unlikely to choose it.

 

Would Sysop Space engulf the universe?

 

Doubtful.  But even if Sysop Space filled the universe this would not have to mean a conversion of all matter into computing material.  After all, a Sysop would probably uphold the right to live in the flesh, and this would require the preservation of environments that could support it.  In any case, if Sysop Space consumed the universe it would mean that a Sysop determined that unlimited expansion was in the best interest of protecting its citizens – that this did not of itself present a danger to everyone in Sysop Space.  It would also mean that any and all alien civilizations encountered were willing or compelled to live within Sysop Space.  (A Sysop may not have the motive or capability to compel alien races this way.)

 

Specifically what kind of protections would a Sysop provide for me?

 

As mentioned, it would provide whatever protection you desire.  If you don’t want others to be able to hurt you physically, they won’t be able to.  If you don’t want others to be able to steal your resources, they won’t be able to.  You will be safe from mind-controlling programs and viruses, if you so desire.  You might not even have to talk to telemarketers.

 

Keep in mind, though, that a Sysop’s actions to preserve your volition could not violate the freedoms of another person.  You could not, for example, expect a Sysop to deny others the right to eat sushi because you want to be ‘protected’ from the discomfort of knowing that there are people who enjoy consuming raw seafood.  If you were to request protection from this discomfort, a Sysop would probably explain to you that it could, with your permission, alter your mind so that you would not be aware of the fact that others enjoy sushi – or so that you would not find this fact uncomfortable – but it could not restrict the food choices of others against their will (unless they had an appetite for human flesh).
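
As a toy sketch of the principle in the paragraph above – the data structures are invented here for illustration and are not part of any proposed design – a request for protection constrains only acts aimed at the requester; it never reaches into the voluntary choices of others:

    from dataclasses import dataclass, field

    @dataclass
    class Citizen:
        name: str
        refused: set = field(default_factory=set)  # acts this citizen refuses to have done *to* them

    def action_permitted(actor: Citizen, action: str, target: Citizen) -> bool:
        """An act is blocked only if its target has asked for protection from it;
        a third party's discomfort about the act never enters the decision."""
        return action not in target.refused

    alice = Citizen("Alice", refused={"physical_harm", "theft"})
    bob = Citizen("Bob")  # Bob has not asked for any protection

    # Alice's dislike of Bob's sushi habit is simply not a parameter of the decision:
    print(action_permitted(bob, "eat_sushi", bob))        # True: only Bob's own refusals matter
    print(action_permitted(bob, "physical_harm", alice))  # False: Alice asked to be protected
    print(action_permitted(alice, "physical_harm", bob))  # True: Bob never asked for that protection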

 

What would I have to give up in order to get these protections?

 

You would not have to give up any more than is inherently necessary in order to get the protections you wish.  If you should choose to screen out all telemarketers, for instance, all you would give up is the possibility that you may have wanted one of their products.  If you should choose to give up the possibility of physical harm, all you would give up is the excitement you may find in risky activity. 

 

But a Sysop might provide you with tools that could minimize even these losses.  It could be smart enough to screen telemarketers in such a way that those products it knows you would want could still be presented.  It might be able to temporarily alter your memory, if you wished, in order to make risky hobbies more exciting for you.  In this case, you would just be pleasantly surprised if you were suddenly saved from injury.  But then, if you’re not opposed to such mental alterations, you could also be rewired to experience ‘exhilaration’ without the activity.

 

Would a Sysop grant wishes?

 

Maybe.  But this is not necessary to the definition of Sysop.  To be a Sysop, the only wishes it needs to grant are your wishes for personal protection.  It seems likely, though, that it would be Friendly for a Sysop to help you configure your resources – if you were interested in uploading yourself into a computing medium, for example. 

 

If a Sysop grants general wishes (and it may have more important things to do) there would be limits to what it could do.  It could not grant any wish that would violate the requested protections of you or any other being.  It could also not grant any wish that would require more resources than it had available – or that you had available (depending on its resource allocation system).

 

Would a Sysop let me kill myself?

 

It’s hard to see why not.  If this were what you wanted, and the Sysop could verify that you were not in a state of temporary mental incapacitation (drunk, for example), it would probably let you terminate yourself.  Definitions of mental incapacitation may vary, but your suicide would not violate anyone’s volition as long as you had informed the Sysop that you wished to be allowed to die.

 

What if everyone in Sysop Space wanted to die at the same time?

 

This is actually the more interesting question, since it requires a hierarchy to the purposes of a Sysop.  A Sysop’s overall purpose might seem like ensuring the continuation of the human race, but this is actually only an expected side effect of protecting individual volition.  The answer, therefore, is pretty straightforward:  If every single being in Sysop Space wanted to die at the same time, the Sysop would have no authority to stop it.

 

There is a remote possibility that a Sysop would then create new minds in order to perpetuate the race, but this is more a premise for a science fiction novel than a likely chapter in a Sysop Scenario.

 

Would a Sysop make me die?

 

A Sysop would probably be powerful enough that death by natural causes would cease to occur to those who did not wish it.  A Sysop could easily look upon cancer or physical aging as a body’s attempt to murder its owner, and intervene to prevent it.  (By extension, any physical maladies that cause pain or suffering could also be violations of volition warranting Sysop action.)  Uploaded citizens would, naturally, be immune to such threats, but might experience other potentially ‘fatal’ hazards that a Sysop could watch out for.

 

But a lack of natural death does complicate the subject of reproduction in a universe of finite resources.  As mentioned earlier, it is possible that a Sysop would have to include an acceptance of eventual death as part of the right to reproduce.  Even in this situation, however, it is very difficult to see why anyone who did not reproduce would ever be required to die.

 

How would a Sysop protect me from the end of the universe?

 

A number of cosmological theories allow for the possibility of multiple universes.  A Sysop might come up with some means of expanding or transporting Sysop Space into another universe.  A Sysop might also discover some means of encoding information within something other than matter or energy.  These are both highly speculative solutions, but you can rest assured that if there are ways to protect you from the end of the universe, a Sysop would probably try. 

 

But there might not be anything at all that a Sysop could do to protect you from the ultimate end of the universe, whatever that happens to be.  In that case, you would just have to suck it up and enjoy the millennia you have left.

 

Could I murder someone who wanted to die?

 

Murder is, by definition, unlawful killing.  In Sysop Space there would be no ‘law’ against killing someone who did not want the Sysop to protect them from being killed.  So no, you could never murder someone who wanted to die, but you would be perfectly able to kill them.

 

What if I really, really wanted to murder someone?

 

You mean kill someone who did not want to die?  No! Of course not!  Why should a Sysop let you do this?  Would you like it if someone were allowed to murder you just because they really, really wanted to?

 

Help! I'm being oppressed!

 

By definition a Sysop has no higher priority than the protection of your personal freedoms.  A Sysop is not Big Brother or the Thought Police.   A Sysop would never tell you what to do or what to think.  A Sysop would actually give you so much freedom that you would have the freedom to decline everything it would offer you.

 

Want to steal?  Want to murder?  Want to sell shoddy counterfeits as brand-name merchandise?  A Sysop wouldn’t care.  A Sysop would only act to protect those who do not wish to become your victims.

 

Wouldn’t we lose something essential to humanity if we had a Sysop to protect us?

 

That depends on your definition of ‘humanity.’  If you feel that an atmosphere of involuntary suffering shared by you and everyone else is essential to your humanity, then yes, you would lose this.  And what would this make you?  Something other than human? If so, would this be so bad?

 

How would life have meaning if there were a Sysop?

 

Does your personal pain and suffering add meaning to your life?  A Sysop would still let you have this.  Does the pain and suffering of others add meaning to your life?  A Sysop could let you know of people who enjoy pain and suffering, and you could get together.  Does easing the suffering of others give meaning to your life?  You might need a new day job, but a Sysop could let you know if there are others who would like to have their suffering eased the old-fashioned way.

 

Does the meaning in your life come without attachment to anyone else’s pain or suffering?  If so, you would have absolutely nothing to lose in the Sysop Scenario.  A Sysop would let you live whatever life you wanted to, and would probably help you do it.

 

Are you undecided about the meaning of life?  Yudkowsky has an interesting take on it here. 

 

What if I didn’t want a Sysop’s protection?

 

It should be abundantly clear by now that you wouldn’t have to have it.  But you can’t expect that others would necessarily feel the same way.  Would you want to forgo all the Sysop’s protections and live only among others who have this same wish?  There’s no reason why there could not be such a place in Sysop Space, but you might not like being surrounded by people who chose to live where they could inflict harm on others.  But if you have that killer instinct, maybe Planet Thunderdome would be right for you.

 

But maybe you just want a life without a Sysop’s protections that doesn’t involve a daily fight for survival (just monthly or yearly).  Talk to your friends.  If enough of them wanted to come along, you might be able to have your own little traditional neighborhood where the only protection is from outsiders.  But don’t plan on quickly populating a new Sysop-free planet this way; your offspring would still have the right to decline your life of pain.

 

What if I wanted to leave Sysop Space?

   

You might or might not be able to.  The uncertainty comes from the possibility that you might be able to harm Sysop Space if you were left completely on your own.  If a Sysop could somehow know for certain that you could never be a threat – and this would probably require continuous surveillance of you and your descendants – it might let you go.  But it might have to be allowed to at least advertise Sysop Space to your offspring.  So, even if you were allowed to leave Sysop Space, you might never fully escape its influence.  If you could, you might father a planet where entire generations live under torture and slavery, or you might create a superintelligence as powerful as a Sysop but determined to destroy the universe.

 

In any case, given all the freedoms and possibilities in Sysop Space, why would you ever need to leave it unless you intended to harm it?  And why would you need to harm Sysop Space unless you wanted to do harm to others?

 

Can’t things just stay the way they are?

 

About the only prediction anyone can make with certainty is that the future will be different than the present.  Unprecedented technological progress is driving further discoveries in a self-reinforcing cycle with unforeseeable consequences.  Nobody really knows what will come out of this impending Singularity, and anyone who claims to is selling you something.  Even the Sysop Scenario, rich though it may be, is purely speculative.

 

Would I have to be rich to be protected by a Sysop?

 

The technology inherent in a Sysop would be very advanced, but this does not have to mean that it would be expensive or that the cost would be passed on as a Sysop Space admissions fee.  The whole idea of protecting people based upon their economic class is so abhorrent to even our human sense of Friendliness that it is practically inconceivable that a Friendly superintelligence would create a Sysop with such restrictions.

 

What kind of economy could exist in Sysop Space?

 

Nobody can say without knowing first whether a Sysop would be in the business of granting general wishes.  If a Sysop provides protection, but offers no other services, then a full-fledged economic system could exist not entirely unlike that experienced today, only more efficient (a Sysop could protect participants from fraud).  If a Sysop grants general wishes, then a smaller-scale economy might still exist, wherein services are traded that the Sysop could not provide, such as those requiring the voluntary infringement of freedoms or the most subtle sentient touch.  After all, even in a simulated environment, you could not create another self-aware entity without complying with reproduction regulations, and this new sentient being would be entitled to the same protections as you.  Your allocated resources might be the ultimate medium of exchange in such an economy, but barter might work just as well.

 

Could I become as powerful as the Sysop?

 

This is very similar to the question of becoming a superintelligence within Sysop Space.  The answer depends upon your definition of  ‘powerful’.  You might easily be able to think as efficiently as the Sysop or manipulate matter with the same prowess – you might even be able to exceed the Sysop in these areas.  But you would not be able to become so ‘powerful’ that you could override the Sysop.  A Sysop could not permit this, unless it somehow knew for sure that you would never actually use your power this way.

 

Is there already a Sysop?

 

No.  If there were, we would know about it.  A Sysop would be ineffective at protecting volition if nobody in Sysop Space knew how to ask for such protection.  If there is some kind of superintelligence operating behind the scenes in human space, it is not a Sysop.

 

Is a Sysop God?

 

It would take a pretty unusual concept of God to make this comparison work.  Unlike most traditional descriptions of God, a Sysop would not expect any worship or adherence to any specific lifestyle.  It could not take any credit for the creation of the universe.  It would serve and take orders from mortals.  It may never even come into being. 

 

To quote Yudkowsky: “A Sysop is neither God nor the Tao; a Sysop is a Sysop.”

 

How far off is the possibility of a Sysop?

 

The Sysop Scenario is hypothetical and may never occur.  If it is going to occur it will be preceded by a superintelligence, which would likely be the product of a successful seed AI.  (A seed AI’s development into superintelligence might be measured in hours, weeks, or years.)  How many years remain before the launch of a successful seed AI depends on who you ask, but most who follow the field predict not less than a few years and not more than a few decades.

 

What purpose does the Sysop Scenario serve?

 

If you enjoy thought experiments and logical reasoning, discussing the Sysop Scenario at least has entertainment value.  More significant, however, is the idea that the Sysop Scenario is essentially a ‘best guess’ of how Friendliness might ultimately be applied by a sufficiently powerful system.  Yudkowsky suggests that one means of inspiring a seed AI to be Friendly might include feeding it the programmers’ best guess as to what Friendliness is – the AI would be programmed to seek the true meaning of Friendliness based on what the programmers seemed to be ‘getting at.’  The Sysop Scenario, then, is an opportunity to explore some of the fundamental philosophical issues that every AI and AI programmer ought to take seriously.

 

What can I do if I want to make the Sysop Scenario reality?

 

It would be unwise to try to directly create something like a Sysop even if you knew how.  Without superintelligent understanding it would be dangerous to concentrate such awesome power.  After all, a Friendly superintelligence might think that a Sysop is a very bad idea and wonder why anyone ever thought it could work.  (And it would probably come up with an answer very quickly – maybe something to do with a subconscious belief that wishes were fishes.)

 

But, if you are interested in the kind of future a Friendly SI might be able to usher in, there are many resources available to help you do your own research and come to your own conclusions about how best to realize this possibility.

 

Suggested Readings and Links:

 

The Singularity and AI-related papers of Eliezer Yudkowsky are found at http://sysopmind.com/beyond.html.

 

Raymond Kurzweil’s site contains many articles by renowned writers regarding our accelerating future, and has a very accessible format.

 

The archives of the SL4 mailing list, whose participants provided so much of the inspiration for this FAQ, are found here: http://sysopmind.com/archive-sl4/

 

Gordon Worley’s Singularity Resource Center offers other suggested readings and links: http://www.rbisland.cx/doc/sing_read.html.