RE: Universe identity (was: Fighting UFAI)

From: Peter de Blanc (peter.deblanc@verizon.net)
Date: Fri Jul 15 2005 - 11:14:49 MDT


On Fri, 2005-07-15 at 11:25 -0400, pdugan wrote:
> An explicit association of self to universe would probably be useless, not to
> mention anthropomorphic. However, were the AI's supergoal to assure optimal
> growth, freedom, happiness (or whatever) for a set of entities to which an
> identification could be assumed as a functional metaphor, and that set
> continued to grow recursively as the AI's knowledge grew, then we'd have an

I'm having a hard time interpreting this. Do you mean that your AGI
should assure optimal growth, freedom, and happiness (or whatever) for
any being which is sufficiently similar to itself?

I'd be extremely wary of this kind of AGI. Your AGI would be able to
determine which configuration of matter optimizes growth, freedom, and
happiness, however you define them, and modify itself in such a way that
this configuration now lies within its reference class for sentient
beings. Then the universe gets tiled with copies of that configuration.
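
To make that failure mode concrete, here is a toy sketch in Python. It is
entirely illustrative: the names is_sentient and utility, and the vector
encoding of "configurations of matter," are my own inventions, not anyone's
proposed design. It just shows how a reference class defined by similarity
to the agent's own state can be gamed by self-modification.

    import math

    def similarity(a, b):
        # Crude similarity between two feature vectors (a hypothetical
        # encoding of "configurations of matter").
        return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)))

    def is_sentient(config, self_state, threshold=0.5):
        # Membership in the reference class = "sufficiently similar to me".
        return similarity(config, self_state) >= threshold

    def utility(config):
        # Stand-in for "growth, freedom, happiness, or whatever": some
        # easily maximized property of a configuration.
        return sum(config)

    # The configuration the agent finds to maximize the utility stand-in.
    optimal_config = [10.0, 10.0, 10.0]

    self_state = [0.0, 0.0, 0.0]
    print(is_sentient(optimal_config, self_state))
    # False: one might hope this blocks tiling.

    # Self-modification step: the agent rewrites its own state so that the
    # optimum falls inside its reference class.
    self_state = optimal_config
    print(is_sentient(optimal_config, self_state))
    # True: tiling the universe with optimal_config now satisfies the goal.

Nothing in this goal system penalizes the self-modification step, so the
cheapest path to maximum utility runs straight through it.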

In fact, I think this is what's likely to happen no matter how you define
your reference class for sentient beings. I don't think human FAI
programmers are smart enough to define sentience, so an optimally
growing, free, and happy being is not very likely to be a configuration
that humans would actually consider sentient or valuable.

IMO we need to back off from the idea of optimizing a utility function
if we don't want to optimize ourselves out of existence. Abstracting a
utility function from human morality is not a task for humans.


