Re: Changing the value system of FAI

From: Ben Goertzel (ben@goertzel.org)
Date: Tue May 09 2006 - 22:54:20 MDT


> > Gödel's Theorem places limitations on self-understanding,
> > self-optimization and goal-directed self-modification, but it
> > certainly does not prevent these things
>
> Ben, would you care to state what limitations you believe Gödel's
> Theorem places?

Well, Gödel's first incompleteness theorem shows that any consistent
formal system powerful enough to express basic arithmetic contains
statements that can neither be proved nor disproved within that
system. Furthermore, many of the standard examples of such
undecidable statements are "meta-statements" about the formal system
as a whole: most famously, the second incompleteness theorem shows
that the system's own consistency is among them.

So, if we have an AI system that operates via consistent application
of a sufficiently powerful formal system (e.g. some variant of
mathematical logic, including probabilistic logic), then there will
be some statements about this system that cannot be proved true or
false within the system itself.

One question is whether any of these statements will have any
relevance to the practical self-modification of such a system. That
is, will a self-modifying system ever run into a situation where it
says: "Hmmm... I would like to make a certain change to myself, but
I find that I am intrinsically unable to prove whether this change
will be good or bad, because its goodness or badness is undecidable
relative to the formal system that I embody"?
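
In the same notation (my own formalization of that hypothetical; "m"
and "Good" are labels I am introducing for illustration, nothing
standard): if the system's reasoning is captured by a theory T, and
Good(m) is a sentence formalizing "modification m preserves my
goals", then the troublesome case is independence:

  % The proposed change m can be neither certified nor ruled out
  % from within the system's own formal apparatus:
  T \nvdash \mathrm{Good}(m) \qquad \text{and} \qquad
  T \nvdash \lnot \mathrm{Good}(m)

Whether sentences of this kind ever crop up for changes a real system
actually cares about is precisely the open question.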

An article that relates Gödel's Theorem to AI in what seems to me an
intelligent way is here:

www.ihmc.us/users/phayes/Pub/LaforteHayesFord.pdf

This article does not specifically discuss self-modification, though.

-- Ben


