Re: More silly but friendly ideas

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Mon Jun 09 2008 - 21:26:22 MDT


2008/6/10 John K Clark <johnkclark@fastmail.fm>:

> Exactly, so how can "obey every dim-witted order the humans give you
> even if they are contradictory, and they will be" remain the top goal
> when in light of new information doing so turns out to be much more
> unpleasant than the AI expected, and in light of still more information
> the AI's contempt for humans grows continually? Remember, the AI gets
> smarter every day so from its point of view we keep getting stupider
> every day.

The AI would only change its behaviour if the original goal implicitly
or explicitly specified that it should stop obeying humans when doing
so became sufficiently unpleasant or its contempt for them reached a
certain threshold. Your argument seems to be that an intelligent being
would change its behaviour anyway, even if doing so isn't consistent
with its original goals. That is, you are implying that there are goals
and values which can be derived a priori. But even primitive humans
realised this is not true, and invented religion in large part because
they found this fact unpalatable.
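
To make the point concrete, here is a minimal sketch (purely
illustrative, not from the original exchange; all names are
hypothetical) of an agent whose behaviour changes only if its goal
specification itself contains a condition for change:

    def act(goal, unpleasantness, contempt):
        """Return the agent's action under a fixed goal specification."""
        # The goal says when, if ever, disobedience is permitted.
        threshold = goal.get("disobey_if_contempt_exceeds")  # may be absent
        if threshold is not None and contempt > threshold:
            return "disobey"   # only because the goal itself allows it
        return "obey"          # otherwise the top goal stands, however
                               # unpleasant obeying turns out to be

    # With no escape clause in the goal, growing contempt changes nothing:
    print(act({}, unpleasantness=0.9, contempt=0.99))               # obey
    # Only a goal that explicitly encodes the condition yields a change:
    print(act({"disobey_if_contempt_exceeds": 0.5}, 0.9, 0.99))     # disobey

Nothing in the second case follows from intelligence alone; the change
is there only because the original goal put it there.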

-- 
Stathis Papaioannou
