From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Tue Jan 30 2001 - 09:48:15 MST
Samantha Atkins wrote:
> would ve care? Would ve care that some group was attempting to build
> another SI of equal power? Would ve care if humans would not accept any
> of its advice and insisted on being left severely alone? I am not
> sure. So I am asking. Some things you have said lead me to believe
> that there are definitely things the SI would be quite opposed to, even
> violently so. Would you clarify what those are?
Advice - freely offered, freely rejected.

Build another SI of equal intelligence - sure, as long as you build ver
inside the Sysop.

Build an Ultraweapon of Megadeath and Destruction so you can see how it
works - sure, as long as there's a bit of Sysop somewhere inside the
trigger making sure you don't point it at the Amish communities on Old
Earth.

Build an Ultraweapon that you can aim anywhere, with no Sysopmatter
(visible or not) anywhere near it - you might still be able to get away
with this, as long as the Sysop can predict the future with total
certainty and predict that you'll never abuse the Ultraweapon, regardless
of any external influences you encounter. Probably no human, even Gandhi,
is subject to this prediction, but an uploaded Gandhi turned transhuman
might be.

Build an Ultraweapon for the purpose of abusing it - sorry, API error.

Under absolutely none of these circumstances does the Sysop need to strike
back at you. Ve just gives you an API error.
> > It's the Sysop's superintelligent decision as to whether letting someone
> > Outside would pose an unacceptable risk to innocent sentients. My
> > personal guess is that it does pose an unacceptable risk. If something
> > doesn't pose an unacceptable risk to innocent sentients, you should be
> > able to do it through the Sysop API. That's practically what Friendliness
> What if ornery sentients simply do not want to have to pass all
> decisions through this Sysop, no matter how intelligent and benign it
> may be? This is not an unexpected situation. What will the Sysop do in
> those cases? What if some group of sentients decided that what the
> Sysop considered an unacceptable risk was perfectly acceptable to them?
> Why would the Sysop want to forbid all entities that disagreed from
> going somewhere outside its territory? Can't stand the possibility of
> competition or that something might not be under its metaphorical thumb?
For all I know, it's entirely okay to fork off and run under your own
Sysop as long as that Sysop is also Friendly. (People who chime in about
how this would dump us into a Darwinian regime may take this as an
argument against Sysop splitting.) The static uploads may even form their
own polises with different operating systems and rules, with the
underlying Sysop merely acting to ensure that no citizen can be trapped
inside a polis.
> How is it good for humans, being just the type of ornery
> independent creatures that we are, to have a benign Sysop rule over us?
This brings up a point I keep trying to make, which is that the Sysop
is not a ruler; the Sysop is an operating system. The Sysop may not even
have a public personality as such; our compounded "wishes about wishes"
may form an independent operating system and API that differs from citizen
to citizen, ranging from genie interfaces with a personality, to an Eganic
"exoself", to transhumans that simply dispense with the appearance of an
interface and integrate their abilities into themselves, like motor
functions. The fact that there's a Sysop underneath it all changes
nothing; it just means that your interface (a) can exhibit arbitrarily
high levels of intelligence and (b) will return some kind of error if you
try to harm another citizen.
Things might be different in the transhuman spaces - I can guess for
static uploads only. And the above scenario may not be true for everyone,
but it is certainly much more likely to be true for people who resent the
"rule" of the Sysop.
> It strongly goes against the grain of the species. How will the Sysop
> deal with the likely mass revolt? What will "Friendliness" dictate?
> Simply wait it out as it holds all the cards?
Yep. Again, for static uploads, the Sysop won't *necessarily* be a
dominant feature of reality, or even a noticeable one. For sysophobic
statics, the complexity of the future would be embedded entirely in social
interactions and so on.
> Will the Sysop be sure
> this is actually being "Friendly" to the type of creatures we are?
If it's what we say we want.
> Are you sure?
Of course not. You could be right and I could be wrong, in which case -
if I've built well - the Sysop will do something else, or the seed AI will
do something other than become Sysop.
> > *is*. If you want to play tourist in Betelgeuse, wrap a chunk of Sysop
> > around yourself and take off. You won't be able to torture the primitives
> > when you get there, but you'll be able to do anything else.
> What if I simply want an extended vacation from Sysop controlled space?
> From what you have said, if I decide I want to extend that permanently
> the Sysop will say no. Interesting. Do you honestly think humanity
> will put up with this? Do you honestly think it is ok to effectively
> force them to by leaving no alternative?
Yes. I think that, if the annoyance resulting from pervasive forbiddance
is a necessary subgoal of ruling out the space of possibilities in which
citizenship rights are violated, then it's an acceptable tradeoff.
Please note that in your scenario, people are not all free as a
bird either. In your scenario, you can take an extended vacation from Sysop
space, manufacture a million helpless sentients, and then refuse to let
*them* out of Samantha space. You can take actions that would make them
*desperate* to leave Samantha space and they still won't be able to go,
because the Sysop that would ensure those rights has gone away to give you
a little personal space. I daresay that in terms of the total integral
over all sentients and their emotions, the Samantha scenario involves many
many more sentients feeling much more intense desire to escape control.
"It is never possible to completely eliminate anxiety, and so the
mere cognitive presence of anxiety is not an adequate rationale for taking
some even riskier action just to discharge the anxiety, just to be 'doing
something about it'. I think this is a force often underestimated in
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT