Re: ESSAY: Would a Strong AI reject the Simulation Argument?

From: Norman Noman (overturnedchair@gmail.com)
Date: Fri Aug 31 2007 - 01:59:34 MDT


On 8/29/07, Rick Smith <rick.smith@ntlworld.com> wrote:
>
> You're assuming there that the UFAI cares enough about its own survival
> for cessation to be a realistic threat.
>
> It could conclude that ceasing to exist increases the chances of its
> primary goal(s) being realised. For example it may design and spawn a more
> fitting UFAI and self-terminate to provide it with more resource.
>
> If we're assuming a UFAI comes about through one or more mistakes in
> 'reaching into mind-space', the same mistakes may lead to any belief-system
> trait we might find odd.

Survival is a significant subgoal of almost any other goal, because in order
to achieve your goals, your will must act upon the world, and in order to do
that, you have to exist.

If somehow the plan goes catastrophically wrong enough that we get an AI
with goals so far from what we intend that it doesn't want to survive, then
it doesn't seem like it'd be much of a threat; it could just delete itself.

Unless it was worried that humanity might re-create it, and destroyed the
planet just to be safe.

But in this situation, the FAI simulating it can just reverse the karma
mechanism, so that the RAI will only be allowed to self-terminate if it
saves humanity first.


