Re: Beyond evolution

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Feb 05 2001 - 00:38:43 MST


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >

>
> > and will not allow
> > disagreement that leads to possible actions that it decides are possibly
> > harmful to the sentiences in its care? Where is the freedom? I see
> > freedom to disagree but not to fully act on one's disagreement?
>
> The Sysop rules won't allow you to kill someone without vis permission.
> You can advocate killing people to your heart's content.

This is not about killing. When I am talking about the more general issue of
freedom, answers that involve killing or the threat to kill both
over-simplify the question and suggest either-or limitations that are
bogus.

>
> > > Build another SI of equal intelligence - sure, as long as you build ver
> > > inside the Sysop.
> >
> > What for? That would rather defeat the purpose of having more than one
> > local Entity of such power. A single entitity is a single point of
> > failure of Friendliness and a great danger.
>
> Multiple entities are multiple points of failure of Friendliness and even
> greater dangers.
>

Yes, but we seem to get along pretty well by more or less balancing one
another's power and, to some extent, limiting each other's ability to run
totally amok in an unstoppable way. A single Sysop is missing that sort of
checks and balances. The assumption is that we can design it so well that
it will automatically check and balance itself. I confess to having a lot
of doubt that this can be done.

> A failure of Friendliness in a transcending seed AI results in a total
> takeover regardless of what a Friendly AI thinks about the Sysop
> Scenario. Once an AI has *reached* the Sysop point you're either screwed
> or saved, so forking off more Sysops after that is a particularly
> pointless risk.
>

But a Sysop does "take over" and govern the entire space. Multiple AIs can
balance each other out, so it is not "regardless of what a Friendly AI
thinks". I have no problem at all with the idea of teaching the AIs
about Friendliness, or the path of the Bodhisattva, or however else you
might wish to express it. I have a large worry with the idea of there
being only one of them, and with it perhaps having too limited a notion
of what "Friendliness" entails.

>
> So what *do* you intend that you can't do with a Sysop? No tautological
> answers like "Build something outside the Sysop"; name some specific fun
> or meaningful thing that you should, morally, be able to do, but which the
> Sysop won't let you do.
>

Sorry, but it is a tautological or abstract (and yet not at all unimportant)
question. The question is why I should allow myself to be
limited by your notion of a Sysop, and why we should believe this is a
really good state or the best solution. If we decide it is not a good
solution but the Sysop disagrees, then what? Looking for a specific
example of the Sysop being in the way is beside the point.

I can see in theory how such a being could avoid being in the way, but I
think my notion of that is a bit different from yours.

> >
> > Again, the Sysop arrogates all decisions and all wisdom to itself. How
> > about upgrading its uploads to their own ever-increasing wisdom.
>
> 15: Sysop: "OK."
>
> > > Under absolutely none of these circumstances does the Sysop need to strike
> > > back at you. Ve just gives you an API error.
> >
> > Err. Already assumes precisely my point. This being is effectively
> > God. You exist only within it and as it allows. Are you really willing
> > to take on the building of such? Are you so convinced it is the Only
> > Answer?
>
> Yes, and yes. The risks inherent in material omnipotence are inherent in
> rapid transcendence and thus inherent in AI. The Sysop Scenario adds
> nothing to that.
>

However, your solution is to make one AI turned SI with the right
moral/ethical grounding to hold this power without running amok. What
of the other billions of beings? Is there an evolutionary path for them
up to and beyond this level of transcendence (assuming they wish it)?
What of other beings reaching full transcendence and having to learn
wisdom and morality along the way? Is there room enough for them to do
so?

> And, incidentally, the Sysop Scenario isn't what I'm telling a Friendly
> seed AI to do. It's what I think a Friendly superintelligence will decide
> to do. I'm neither taking on the building of such, nor need to be
> convinced that it's the Only Answer.
>

OK. I don't think it will decide quite along the Sysop lines, but you
could well be right.

> >
> > But this Sysop can't be built by your earlier response except totally
> > within the Sysop so in no real sense is it independent.
>
> No, I'm pointing out a possible variation on my earlier response (albeit
> one that I personally think improbable), under which it's possible to
> construct an independent Sysop as long as it's an independent Friendly
> Sysop.

OK. As long as the first Sysop doesn't insist Friendliness is only
compatible with roughly its own solutions to the very complex questions
involved. My intuition is that there are many possible solution spaces
that cannot all be explored by any one SI. Some of them may not even
seem all that "Friendly" from other particular solution spaces.

>
> > I am concerned
> > by the phrase "static uploads". Do you mean by this that uploads cannot
> > grow indefinitely in capability?
>
> No, I mean modern-day humans who choose to upload but not to upgrade.
>

I guess I have a hard time expecting many people to do this, or at
least it is doubtful that they wouldn't choose to upgrade pretty soon.
So what is the significance of "static"? I think I am missing something
there.

> >
> > Let's see. The SysOp is a super-intelligence. Therefore it has its own
> > agenda and interests.
>
> NON SEQUITUR
>

How so?

> > It controls all aspects of material reality and
> > all virtual ones that we have access to.
>
> Yes.
>
> > This is a good deal more than
> > just an operating system.
>
> Why? The laws of physics control all aspects of material reality too.

The laws of physics are not part of, or at the bidding of, a conscious
super-intelligent entity, as far as we know. That is a large difference.

>
> > What precisely constitutes harm of another
> > citizen to the Sysop?
>
> Each citizen would define the way in which other entities can interact
> with matter and computronium which that citizen owns.
>
> > For entities in a VR who are playing with
> > designer universes of simulated beings they experience from inside, is
> > it really harm that in this universe these simulated beings maim and
> > kill one another? In other words, does the SysOp prevent real harm or
> > all appearance of harm? What is and isn't real needs answering also,
> > obviously.
>
> I don't see how this moral issue is created by the Sysop Scenario. It's
> something that we need to decide, as a fundamental moral issue, no matter
> which future we walk into.
>

OK, but I am exploring what you think the Sysop's answer is, or should be,
to remain compatible with Friendliness. I have a suspicion that it is not
possible for many types of being to evolve without at least being under
the impression that they can harm and be harmed. It would be nice if I
were wrong about that, but I don't think I am. If that impression is
necessary in certain types of being-spaces, then Friendliness would entail
something that doesn't particularly look friendly.

>
> > > Of course not. You could be right and I could be wrong, in which case -
> > > if I've built well - the Sysop will do something else, or the seed AI will
> > > do something other than become Sysop.
> >
> > OK. If it is not the Sysop what are some of the alternate scenarios
> > that you could see occurring that are desirable outcomes?
>
> 1) It turns out that humanity's destiny is to have an overall GroupMind
> that runs the Solar System. The Sysop creates the infrastructure for the
> GroupMind, invites everyone in who wants in, transfers control of API
> functions to the GroupMind's volition, and either terminates verself or
> joins the GroupMind.
>
> 2) Preventing citizens from torturing one another doesn't require
> continuous enforcement by a sentient entity; the Sysop invokes some kind
> of ontotechnological Word of Command that rules out the negative set of
> possibilities, then terminates verself, or sticks around being helpful
> until more SIs show up.
>

Both of these scenarios miss a possibility that I think is crucial:
that individual beings in all their variety have to evolve
their own solution, without some super-being (in their context) solving
the problem for them. From past interactions I know you dismiss this as
a likely scenario, but I think it might be the only one that leaves the
beings fully impelled and able to grow up.

> > > Yes. I think that, if the annoyance resulting from pervasive forbiddance
> > > is a necessary subgoal of ruling out the space of possibilities in which
> > > citizenship rights are violated, then it's an acceptable tradeoff.
> >
> > If the citizens have no choice then there is no morality.
>
> That sounds to me like one more variation on "It's the struggle that's
> important, not the goal." What's desirable is that people not hurt one
> another. It's also desirable that they not choose to hurt one another,
> but that's totally orthogonal to the first point.
>

If I cannot choose to be hurtful then I cannot choose not to be. I have
no choice but to be harmless. I did not grow into choosing wisely but
had the choice made for me by something Other. I am thus a very different
kind of being from one that grew by learning to choose wisely. People not
hurting one another does not trump people learning not to hurt one another
and why that matters. We could lock everyone in strait-jackets
(physical, mental or chemical) and metaphorically feed them
intravenously, and thereby accomplish their not hurting one another.

> You can still become a better person, as measured by what you'd do if the
> Sysop suddenly vanished.
>

But will you ever know unless it does?

> Are we less moral because we live in a society with police officers?
> Would we suddenly become more moral if all law enforcement and all social
> disapprobation and all other consequences of murder suddenly vanished?
>

Not at all a good analogy. I am not talking about consequences
disappearing. I am talking about freedom to choose and to face
consequences remaining.

> >
> > The Sysop is refusing to let me out of Sysop space. Truthfully we have
> > no idea how various sentiences will react to being in Sysop space no
> > matter how benign you think it is. Your hypothetical space where I
> > torture sentients is an utter strawman.
>
> Is it still a strawman scenario when integrated over the six billion
> current residents of Earth? Or is only Samantha allowed to go Outside?
>

No. Others who may have more nefarious motives can go outside too. Are you
claiming they would never tire of being bloody tyrants, never feel
remorse, never seek to undo some part of the painful, ugly creation they
made? Without experiencing the consequences, how do beings actually
learn these things?

> The Friendly seed AI turned Friendly superintelligence makes the final
> decision, and ve *does* have an idea of how various sentiences will
> react. If the Sysop scenario really results in more summated misery than
> letting every Hitler have vis own planet, or if there's some brilliant
> third alternative, then the Sysop scenario will undoubtedly be quietly
> ditched.

Sure. Make a space (probably VR) where entities can do whatever they
wish, including making their own VR spaces, controlled by themselves, which
are as miserable or wonderful as they wish and as their skill and wisdom
allow. Keep the Sysop as an entity that ensures all conscious beings
created, or who become involved, come to no permanent or irreparable
harm. Otherwise they are free to be as good or as horrible to one another
as they wish. And they are free not to know that they can neither come to
irreparable harm nor cause any. Would this be compatible?

- samantha


