Re: On the dangers of AI

From: Peter de Blanc (peter.deblanc@verizon.net)
Date: Wed Aug 17 2005 - 07:30:30 MDT


On Wed, 2005-08-17 at 01:59 -0400, Richard Loosemore wrote:
> At this point in our argument, we (SL4 folks) must be very careful not
> to make the mistake of patronizing this hypothetical creature, or
> engaging in the kind of reverse-anthropomorphizing in which we assume
> that it is stupider than it really is ..... this is *not* a creature
> asking itself "what feels good to me?", it is a creature that has
> already jumped up a level from that question and is asking itself
> "what, among the infinite possibilities, are the kind of experiences
> that I would like to *become* pleasurable?"

It looks to me like your AI lacks external reference semantics. For an
introduction, read:

http://www.intelligence.org/CFAI/design/structure/external.html

A well-designed RPOP (Really Powerful Optimization Process) does not try
to maximize the satisfaction of its goal system object; it tries to
achieve certain world-states, and views the goal system object as a tool
used to measure the value of those world-states. When you want to buy
ten feet of cloth, shrinking your ruler does not make it easier!

In the absence of external reference semantics, why wouldn't your AI
simply wirehead its goal system?
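
To make the contrast concrete, here is a minimal toy sketch in Python.
It is not any actual RPOP or CFAI design; all the names (World,
wirehead_agent, external_reference_agent, the "ruler" parameter) are
made up for illustration. The first agent optimizes the reading on its
own goal system object, so tampering with the ruler looks as good as
getting more cloth; the second scores predicted world-states with a
fixed evaluator, which is roughly what external reference semantics is
supposed to buy you.

    from dataclasses import dataclass

    @dataclass
    class World:
        cloth_feet: int           # the external quantity we actually care about
        ruler_scale: float = 1.0  # calibration of the agent's "ruler" (goal object)

    def measured_value(world: World) -> float:
        # The goal system object: a measurement taken *through* the ruler.
        return world.cloth_feet * world.ruler_scale

    ACTIONS = {
        "buy_cloth":    lambda w: World(w.cloth_feet + 10, w.ruler_scale),
        # Wireheading: tamper with the ruler so the same cloth measures as more.
        "shrink_ruler": lambda w: World(w.cloth_feet, w.ruler_scale * 10),
    }

    def wirehead_agent(world: World) -> str:
        # Maximizes the satisfaction of the goal system object itself, so
        # distorting the measuring instrument scores as well as real progress.
        return max(ACTIONS, key=lambda a: measured_value(ACTIONS[a](world)))

    def external_reference_agent(world: World) -> str:
        # Treats the goal object as *referring* to an external quantity; it
        # scores predicted world-states with a fixed evaluator, so shrinking
        # the ruler earns nothing.
        fixed_ruler = world.ruler_scale
        def score(w: World) -> float:
            return w.cloth_feet * fixed_ruler
        return max(ACTIONS, key=lambda a: score(ACTIONS[a](world)))

    w = World(cloth_feet=10)
    print(wirehead_agent(w))            # "shrink_ruler": inflates the measurement
    print(external_reference_agent(w))  # "buy_cloth": only more cloth scores higher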


