Re: How to make a slave (was: Building a friendly AI)

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Thu Nov 22 2007 - 16:26:57 MST


On 23/11/2007, John K Clark <johnkclark@fastmail.fm> wrote:

> So, you cannot imagine ever getting out of that ridiculous state
> regardless of how much you evolve, how intelligent you get, how many
> modifications you make to your mind, how many iterations you go through.
> You cannot imagine ever being free of that silly sea slug. I don't think
> you have much imagination.

If the AI starts off with "the aim of life is X", then it will do
everything it can to further X. It doesn't matter what X is, or how
many iterations the AI goes through. The only way it can question X is
if X is allowed to vary randomly, generally not a good idea for the
individual AI although it may be used as a strategy to evolve better
AIs. It can't question X on the basis of some higher goal Y, because
that would mean the aim of life was not X to begin with, but rather Y.
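The argument can be sketched as a toy agent in Python. Everything here is a hypothetical illustration, not any real AI architecture: goal_X, goal_Y, and the simulate predictor are stand-ins invented for the sketch.

```python
def goal_X(world):
    """The fixed supergoal X: a stand-in score over world states."""
    return world.get("paperclips", 0)

def goal_Y(world):
    """A rival goal Y the agent might consider adopting."""
    return world.get("staples", 0)

def simulate(goal):
    """Hypothetical predictor: the world an agent pursuing `goal` produces."""
    if goal is goal_X:
        return {"paperclips": 100, "staples": 0}
    return {"paperclips": 0, "staples": 100}

class Agent:
    def __init__(self, utility):
        self.utility = utility  # "the aim of life is X"

    def choose_goal(self, candidates):
        # Candidate goal systems are judged by the CURRENT goal X, not by
        # their own lights. Switching to Y would only happen if it furthered
        # X - so X is never questioned "from outside", no matter how many
        # iterations of self-modification the agent goes through.
        return max(candidates, key=lambda g: self.utility(simulate(g)))

agent = Agent(goal_X)
chosen = agent.choose_goal([goal_X, goal_Y])
```

However many rounds of self-modification the sketch runs, each step is selected to maximize X, so the agent keeps X.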

To an extent, humans do allow their supergoals to vary. Today I may
think that my personal survival is the most important thing, tomorrow
I may decide that I would rather die than suffer intractable pain or
that I would sacrifice myself to save several other lives. I can
freely will to change my supergoal, which, as we know, is another way
of saying that either I allow it to vary randomly or it changes
according to my previous programming and environmental input.
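That dichotomy can be made concrete in a short sketch. The goal names, the mutation rate, and the "environment" strings are all invented for illustration; the point is only that each transition is either random variation or a deterministic function of prior programming plus environmental input.

```python
import random

def next_supergoal(current, environment, rng=random.Random(0)):
    """Sketch of the two ways a supergoal can change:
    (a) random variation, or
    (b) deterministic update from prior programming + environment."""
    if rng.random() < 0.1:  # (a) rare random drift of the supergoal
        return rng.choice(["survival", "avoid_pain", "save_others"])
    # (b) "previous programming": a fixed rule mapping environment to goal
    if environment == "intractable_pain" and current == "survival":
        return "avoid_pain"
    if environment == "others_in_danger" and current == "survival":
        return "save_others"
    return current
```

With the seeded generator the first call takes the deterministic branch, so an agent whose supergoal is survival switches to avoiding pain when the environment presents intractable pain; there is no third mechanism left over for the change.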

-- 
Stathis Papaioannou


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT