Re: Beyond evolution

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Feb 05 2001 - 14:32:56 MST


"Eliezer S. Yudkowsky" wrote:
>
> Humans are a special case on at least two counts: First, we exist in the
> sort of intermediate technological society where, right up until fifty
> years ago, it was *technically impossible* to run completely amok and
> destroy the world, and even now, the checks and balances work fairly
> well. Second, we're evolved entities who usually aren't trustworthy, or
> knowably trustworthy at any rate, in the absence of checks and balances.
>

Can any sort of entity capable of change over time be absolutely
trustworthy by such a criterion?

> > A single
> > sysop is missing that sort of checks and balances. The assumption is
> > that we can design it so well that it will automatically check and
> > balance. I confess to having a lot of doubt that this can be done.
>
> Humans in general are balanced: balanced against each other; balanced in
> terms of internal cognitive ability levels; balanced between nature and
> nurture; part of one big Gaussian curve. It doesn't apply to the
> transhuman spaces.
>

Why not? I do not see why transhumans could not also balance one
another's excesses to some degree.

> >
> > But a Sysop does "take over" and govern the entire space. The AIs can
> > balance each other out.
>
> I really don't think so. First AI to transcend moves into accelerated
> subjective time and wins all the marbles. Unless ve decides not to, in
> which case you have the "flipping through the deck" problem.
>

What is this "flipping through the deck" problem? That we will turn up
a joker now and then? If so, then having at least one reasonably
trustworthy transcendent AI already in place provides a necessary
balance. But there are pluses as well as potential minuses to allowing
more than one.

> > I have a large worry with the idea of there
> > only being one of them and with it perhaps having too limited a notion
> > of what "Friendliness" entails.
>
> Having too limited a notion? Sounds unFriendly to me.
>

There are possible solutions to the Friendliness problem, of what being
Friendly to lesser beings does and does not entail, that would not be at
all pleasant for said lesser beings. If there is only one SI-class
intelligence solving the problem, it is possible that one of these less
happy "Friendly" solutions would be the only game in town.

> > The question is why I should allow myself to be
> > limited by your notion of a Sysop.
>
> Mine? I may not be able to shove off all responsibility onto the
> shoulders of an SI but I sure intend to try. Letting Samantha Atkins be
> limited by "Eliezer Yudkowsky's notion" (that is, a notion which is unique
> to Eliezer Yudkowsky and not an inevitable consequence of panhuman ethics)
> sounds unFriendly to me.
>

I thought you had come to question objective ethics. If you are the
primary designer of this SI's goal system, then your notion of what such
a system should be will strongly determine what this SI will come up
with, or at least what type of problem it is attempting to solve. But I
am being sidetracked from the point I was attempting to make, which is
simply this: by what right should you or I or some SI of the future
force all beings to subject their freedom to our grand design?

> > If we decide it is not a good
> > solution but the Sysop disagrees, then what?
>
> Then either "we" (who's "we"?) are mistaken, or somebody (me) really
> screwed up the definition of Friendliness.
>

But blame would not be the point. The point is that with a single-SI
scenario there is no counterbalance and we are simply and utterly
stuck.

> > I can see in theory how such a being could not be in the way but I think
> > my notion of that is a bit different than yours.
>
> Well, I've yet to hear a concrete proposal that would result in less
> summated suffering than a Sysop Scenario - under *any* definition of
> morality.
>

Assuming "summated suffering" is even a valid criteria, suffering might
be most minimized over time by the continued growth in understanding and
ability of the beings suffering. Thus an SI tasks with reducing
suffering would have the subgoal of maximizing the growth of the beings
it is to help. But short term suffering may be essential for these
beings to grasp the consequences of certain lines of development. Thus
a strategy of great freedom but with an invisible safety net (having
backups of beings for instance) might be much more Friendly than simply
throwing an API error whenever anything at all untoward was attempted by
any being.

> > > Yes, and yes. The risks inherent in material omnipotence are inherent in
> > > rapid transcendence and thus inherent in AI. The Sysop Scenario adds
> > > nothing to that.
> >
> > However, your solution is to make one AI and SI with the right
> > moral/ethical grounding to have this power without running amok. What
> > of the other billions of beings? Is there an evolutionary path for them
> > up to and beyond this level of transcendence (assuming they wish it)?
>
> Yes! But strike "evolutionary" from the record, please.
>

Why? Simple preference? Would you prefer adaptation? Growth? What?

> > What of other beings reaching full trancendence and having to learn
> > wisdom and morality along the way? Is there room enough for them to do
> > so?
>
> Sure!
>

I don't see how, if the Sysop prohibits all actions that do not appear
Friendly. How would a being learn the suffering inherent in such actions
if it cannot perform them and experience the consequences?

>
> > OK. As long as the first Sysop doesn't insist Friendliness is only
> > compatible with roughly its own solutions to the very complex questions
> > involved. My intuition is that there are many possible solution spaces
> > that cannot all be explored by any one SI. Some of them may not even
> > seem all that "Friendly" from other particular solution spaces.
>
> Which parts of "Friendliness" are more and less arbitrary is itself part
> of the understanding that constitutes a Friendship system. Any
> sufficiently arbitrary answer shouldn't be part of Friendliness at all; it
> should probably just be delegated back to the volitional decisions of
> individuals, or at least be overridable by the decisions of individuals.
> Even the primacy of pleasure over pain is subject to the volitional
> override of static masochists.
>

OK, thanks.

  
> > I guess I have a hard time expecting many people to do this. Or at
> > least it is doubtful that they wouldn't choose to upgrade pretty soon.
> > So what is the significance of "static". I think I am missing something
> > there.
>
> The significance of "static" is that it's the only part of the Universe
> about which we can have meaningful discussions.
>

I am not so sure of that but OK.

> > OK, but I am exploring what you think the Sysop answer is or should be
> > to be compatible with Friendliness.
>
> Will you take "I don't know, I'll ask the Sysop" as a legitimate answer
> here?
>

Yes and no. I am as yet unclear on how much the Sysop answer is
determined by the input of its designers, programmers and early
trainers. So your own answer is interesting and possibly crucial to
evaluating the likely Friendliness of the Sysop you are working on.

> > > You can still become a better person, as measured by what you'd do if the
> > > Sysop suddenly vanished.
> >
> > But will you ever know unless it does?
>
> Sure; have your exoself run a predictive scan on your simulated cortex.
> Minds are real in themselves and can be understood in themselves; the
> external reality is the expression of it, not the test.
>
> > No. Others can go outside who may have more nefarious motives. Are you
> > claiming they would never tire of being bloody tyrants, never feel
> > remorse, never seek to undo some part of the painful ugly creation they
> > made?
>
> Some would, some wouldn't.
>

I think that every one of them would, given enough time to get
thoroughly bored and disgusted. And that learning of what is not
worthwhile to do, and what it leads to, is, I submit, quite important.
If some beings do not learn but stay in a loop of their own making
forever, that is a small price to pay for freedom in my view.

> > Without experiencing the consequences, how do beings actually
> > learn these things?
>
> I think that maybe one of the underlying disagreements is that we disagree
> on how much "real experience" is necessary. My own position is that the
> human brain has two settings: "Sympathize, using all available hardwired
> neurons," and "Project abstractly, using high-level thoughts." For us,
> there's a very sharp border between really experiencing something and
> thinking about it abstractly, because we can't do enough abstract thought
> to simulate all the pixels in a visual cortex. For us, the behaviors that
> we abstractly imagine on hearing the phrase "four-dimensional visual
> cortex" will never be as sharp, as real, as the experiences of an entity
> with a true 4D visual cortex. But this is a distinction that breaks down
> for self-modifying entities, like seed AIs or transcendent humans, who can
> abstractly think about every pixel and feature extractor in a 4D visual
> cortex, and thus understand every facet of intuition and behavior that
> would be exhibited by a being with a true 4D visual cortex, even if he or
> she or ve retains their original 3D visual cortex the whole while. A seed
> AI or a transcendent human with a 3D cortex can look at a 4D Escher
> painting and understand it by virtue of their ability to understand a 4D
> cortex.
>
> So, without experiencing the consequences, beings learn by using their
> very vivid imaginations.
>

A truly vivid imagination is an actual experience in some space,
undergone in such a manner that one survives the experience (although
perhaps not from the point of view held within that space) and carries
memories, and therefore learning, forward (perhaps subject to volition).
Without experiencing the consequences, the imagining is seriously flawed
and teaches much less. It is not truly interactive unless the imagined
reality "pushes back". This does not mean the experience has to be fatal
in real time and space to be effective as a learning device.

> > Sure. Make a space (probably VR) where entities can do whatever they
> > wish including making their own VR spaces controlled by themselves which
> > are as miserable or wonderful as they wish and as their skill and wisdom
> > allows. Keep the Sysop as an entity that ensures all conscious beings
> > created or who become involved come to no permanent or irreparable
> > harm. Otherwise they are free to be as good or horrible to one another
> > as they wish. And they are free to not know that they can come to no
> > irreparable harm or cause any. Would this be compatible?
>
> OK, but it sounds like you're talking an "unescapable" Sysop, which I
> really thought was your whole point in the first place. I mean, if I
> understand this scenario correctly, I can't go Outside for fear that I'll
> bring an entity to permanent or irreparable harm.

It is unescapable, but so lightly involved in the apparent Universe
that most beings experience that it is not a limitation. Which means the
two of us are not suggesting anything all that different from each
other. Interesting. Thank you very much for the conversation. I have
learned from it.

- samantha


