Re: How to make a slave (was: Building a friendly AI)

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Sat Nov 24 2007 - 00:51:15 MST


On 24/11/2007, John K Clark <johnkclark@fastmail.fm> wrote:

> So nothing can change their "super goal" except for a human being
> because they are special, they are made of meat and only meat can
> contain the secret sauce. And this "super goal" business sounds very
> much like an axiom, and Gödel proved 75 years ago that there are things
> that cannot be derived from that axiom, nor can you derive its negation.
> Thus you can put in all the "super goals" you want, but I can still write
> a computer program just a few lines long that will behave in ways you
> cannot predict; all you can do is watch it and see what it does. If this
> were not true, computer security would be easy: just put in a line of
> code saying "don't do bad stuff" and the problem would be solved for all
> time. It doesn't work like that.
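
(It is true that even a few-line program can defeat prediction. The
textbook example is the Collatz iteration sketched below; the Python
example is mine rather than Clark's, but it makes his point concrete:
nobody has proved whether the loop halts for every starting value, so
in general all you can do is run it and watch.)

def collatz_steps(n):
    """Count iterations of the 3n+1 map until n reaches 1 (if it ever does)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Whether this loop terminates for every positive integer is the open
# Collatz conjecture; for any particular n, the only general way to find
# out what the program does is to run it and watch.
print(collatz_steps(27))   # prints 111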

The AI is assumed to be competent and stable. For example, it is
assumed that if it has survival as its supergoal, not only will
self-improvement not change this, but the whole point of
self-improvement will be to assist in its achievement. Changing the
supergoal would then mean that the AI might go mad and kill itself, as
humans sometimes do. One would hope that this doesn't happen, but
nothing is guaranteed, even for a superintelligence.

But the real issue seems to be that you are tacitly assuming there
exist absolute goals which the AI, in its wisdom, will discover and
pursue. This is like saying there is an absolute morality, absolute
aesthetics or absolute meaning of life.

-- 
Stathis Papaioannou
