Re: Opting out of the Sysop scenario?

From: Brian Atkins (brian@posthuman.com)
Date: Sat Aug 04 2001 - 20:34:14 MDT


One possibility I'd add is that Friendliness science, or whatever the
equivalent is post-Singularity, may advance to the point of being able
to prove whether or not you are Friendly and will stay Friendly. So it
might be possible to let the Sysop scan your mind and prove to it that
it can safely let you off on your own without having to worry about what
you might do. I don't see what the advantage of this would be for you,
though; the only conceivable reason to want to live outside Sysop Space
would be to do something bad.

As for Sysop failure scenarios, I don't see what you're worrying about.
If it turns out that implementing such a scenario would not scale well,
then the initial FAI will probably not implement it. If it does create
a Sysop, it will be designed to handle natural disasters. There is the
remote chance it runs into a foreign unFriendly AI or something it
can't beat; in that case it would probably advise all the Citizens to
run for their lives. There's nothing preventing them from spreading
all over the galaxy and beyond anyway, whether the Sysop is alive or
dead. If you want to go to Alpha Centauri, the Sysop can attach a
chunk of itself to your ship to travel with you and start a fresh copy
of itself in that solar system.

I have to recommend again thinking of it like UNIX perms or something
similar. It is there for a good reason. It is rather mechanistic in
its basic actions, and does not "go wrong" out of the blue. It
provides a basic security feature without which you would be utterly
screwed. Going without it would be like running an old copy of Linux
with all the security holes unpatched, just waiting for someone to
exploit you. In other words, the potential disaster you are worrying
about (a Sysop run amok that we're trapped in) doesn't go away if a
Sysop doesn't currently exist. It can happen any time, anywhere. In
an anarchist future with no Sysop, anyone is free to create one with
their local computronium. It seems quite inevitable to me that control
will eventually be established over most matter; the question is how
it plays out. Do the more aggressive transhuman minds squabble and
fight over it, leading to a software/nanotech arms race? Wouldn't the
end result of that be one or a few people ending up with most of the
matter for themselves, using some kind of Sysop-like (sans
Friendliness, obviously) software to maintain control over it? That's
just one scenario, but it doesn't sound any nicer or less risky than
the traditional Sysop idea.
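
To make the UNIX perms analogy concrete, here is a minimal Python
sketch (purely illustrative and simplified; it ignores supplementary
groups and ACLs, and has nothing to do with how an actual Sysop would
work) of how mechanical a classic permission check is. A few ownership
and mode bits decide the answer, and nothing changes unless someone
with the authority to do so changes those bits:

    import os
    import stat

    def can_write(path, uid, gid):
        """Mechanically check classic UNIX permission bits for write access."""
        st = os.stat(path)
        if uid == 0:
            return True                              # root bypasses ordinary checks
        if st.st_uid == uid:
            return bool(st.st_mode & stat.S_IWUSR)   # owner write bit
        if st.st_gid == gid:
            return bool(st.st_mode & stat.S_IWGRP)   # group write bit
        return bool(st.st_mode & stat.S_IWOTH)       # other write bit

That is the sense in which I mean "mechanistic": the rules are simple,
auditable, and don't spontaneously turn against you out of the blue.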

The idea that we will all fly off in our space ships and live happily
ever after just doesn't work for me :-)

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/
