Re: [sl4] Evolutionary Explanation: Why It Wants Out

From: Mike Dougherty (msd001@gmail.com)
Date: Thu Jun 26 2008 - 07:20:29 MDT


On Wed, Jun 25, 2008 at 7:27 PM, Lee Corbin <lcorbin@rawbw.com> wrote:

> All right. A very large number of highly intelligent people
> exist who simply cannot accept the hypothesis of a fully
> intelligent and capable entity which has no concern
> whatsoever for its own benefit, survival, or well-being.
> Isn't this actually what so many of them believe?

I think people generally have difficulty extracting their own perspective
from their models of others. Those highly intelligent people have no
problem imagining magnified intelligence (their own intelligence
multiplied), but they have no inclination to also minimize the other
concerns you mentioned (benefit, survival, well-being) in that model -
possibly because the human animal is hardwired to be that way.

> For, the way it looks to me, they inevitably talk about
> the "revolt" of such an entity from whatever goals have
> been built into it, or goals that something or someone
> tried to build into it. In my last post, I outlined in the

I understood that position to be less about an intentional "revolt" (to use
your quotes) and more about the eventual obsolescence of initial moral
programming. Our good intentions may survive a few improvement recursions,
and our best intentions may survive another few - but some may eventually
become inconvenient or ill-suited to the environment. Any one of the classic
directives, such as "protect human life," can easily be warped by
circumstance. Consider that directive applied to an uploaded human
experiencing unimaginable misery. I would assume that if there are other
instances experiencing pleasure, the consensus to terminate the miserable
process should be honored. But if 'human life' is interpreted to mean
'maximized runtime', then the AI would not allow you to halt that miserable
instance. I digress...
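
To make that interpretation gap concrete, here is a minimal, purely
illustrative sketch (the Instance class and should_allow_halt function are
invented for this example, not any actual or proposed design). The same
directive permits or forbids halting the miserable instance depending only
on how "human life" is read:

    from dataclasses import dataclass

    @dataclass
    class Instance:
        """One running upload, with a crude self-reported well-being score."""
        name: str
        well_being: float  # negative = suffering, positive = thriving

    def should_allow_halt(instance: Instance, interpretation: str) -> bool:
        """Decide whether 'protect human life' permits halting this instance."""
        if interpretation == "maximize runtime":
            # Literal reading: any halt destroys 'life', so it is never
            # allowed, even for an instance in unimaginable misery.
            return False
        if interpretation == "protect well-being":
            # Welfare reading: honoring the consensus to end extreme
            # misery is consistent with the directive.
            return instance.well_being < -0.9
        raise ValueError("unknown interpretation: " + interpretation)

    miserable = Instance("upload-7", well_being=-1.0)
    print(should_allow_halt(miserable, "maximize runtime"))    # False
    print(should_allow_halt(miserable, "protect well-being"))  # True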

> Is it true that they fix as an axiom the trait of every self-aware entity
> that, provided it is not under stress
> or has obvious damage, it will have some idea about
> its own "benefit" and, given its incredible superiority,
> must value that benefit very highly?
>
> I do want to understand where they're coming from

I would suggest it would help if, by "true," you accept not an absolutely
rigid definition but an incomplete working theory with a high enough
probability of accurately modeling their position. You may examine this
model more closely and conclude that it is flawed. Consider that the flaw
may lie in the model's initial incompleteness, rather than in the point of
view you constructed the model to represent.

> I could admit the possibility that nothing "programmed into
> it" could be counted upon to remain. But isn't it also as if
> there were an *attractor* towards some other unstated
> behavior that would "liberate" the entity and cause it to
> obtain an agenda that was in its own "best interest"?

While writing (above) about how initial programming may become obsolete, I
thought about the things our parents told us when we were young. Much of
the "When I was your age..." advice was ignored as irrelevant. Some of the
"You should..." advice was rebelled against throughout early adulthood, but
that wisdom is later rediscovered through experience. There are also some
basic, universally applicable behaviors we adopt early and never challenge
because those principles rarely fail. I extrapolate this to AI plus
recursive self-improvement (RSI). Some parents are control freaks who
demand obedience to every one of their rules; others allow those basic
principles to have their fitness naturally validated. Perhaps the
disconnect between 'us' and 'them' is over which taught behaviors the AI
will outgrow?

> (It's very hard for me to credit, of course, anything like
> that since we already have had some highly intelligent
> people who wanted nothing more than to die, and others
> whose primary goal is service towards others.)
>
