From: Ben Goertzel (firstname.lastname@example.org)
Date: Mon Dec 11 2000 - 07:13:10 MST
> This all gets complex. You'd have to read "Friendly AI" when it comes out.
I do intend to ;>
> But the first steps, I think, are: (1), allow for the presence of
> probabilistic reasoning about goal system content - probabilistic
> supergoals, not just probabilistic subgoals that are the consequence of
> certain supergoals plus probabilistic models.
This is a key point, and really gets at why your "I love mommy and daddy, so
clone them" example seems weird.
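The distinction in the quoted point (1) could be sketched in a few lines of code. This is only a toy illustration with invented names, not any actual goal-system implementation: a *probabilistic supergoal* carries uncertainty about whether the goal itself is right, while a *probabilistic subgoal* gets its uncertainty derived from a supergoal plus the system's probabilistic model of what helps achieve it.

```python
# Toy sketch of probabilistic supergoals vs. derived subgoals.
# All names here (Goal, add_subgoal, p_helps) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    confidence: float = 1.0            # supergoal-level uncertainty: belief the goal itself is right
    subgoals: list = field(default_factory=list)

def add_subgoal(parent, child, p_helps):
    """Attach a subgoal whose confidence is *derived*: the parent's
    confidence times the model's estimate that achieving it helps."""
    child.confidence = parent.confidence * p_helps
    parent.subgoals.append(child)
    return child

# A supergoal held with uncertainty, not just uncertain subgoals under it:
care = Goal("care for family", confidence=0.9)
add_subgoal(care, Goal("earn income"), p_helps=0.8)
print(care.subgoals[0].confidence)     # parent's 0.9 scaled by the model's 0.8
```

The point of the sketch: with only derived subgoal probabilities, the supergoal's own `confidence` would be frozen at 1.0, and nothing in the system could ever revise it.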
We try to teach our children to adopt our value systems. But our explicit
statements in this regard generally are LESS useful than our concrete examples.
Children absorb goals and values from their parents in all kinds of obvious and subtle ways,
most of which come from emotionally-charged interaction in a shared environment.
Choice of new goals is not a rational thing: rationality is a tool for
achieving goals, and for
dividing goals into subgoals, but not for replacing one's goals with new ones.
Perhaps this is one of the key values of "emotion." It causes us to replace
supergoals with new supergoals, by latching onto the goals of our parents and others around us.
>(2), make sure the very
> youngest AI capable of self-modification has that simple little reflex
> that leads it to rewrite itself on request, and then be ready to *grow*
> that reflex.
Rewriting itself on request is only useful if the system has a strong
understanding of HOW
to rewrite itself...
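That gap could be made concrete with a small sketch. Everything here is hypothetical (the class, the request table), not a proposed architecture: the "rewrite on request" reflex is just a stub until the system can map a request onto a concrete self-modification it actually understands how to perform.

```python
# Toy sketch: a rewrite-on-request reflex is only useful given a model of HOW.
# All names are invented for illustration.
class Seed:
    def __init__(self):
        self.params = {"verbosity": 1}
        # The system's model of HOW to change itself: request -> concrete edit.
        self.known_rewrites = {"be quieter": ("verbosity", 0)}

    def rewrite_on_request(self, request):
        edit = self.known_rewrites.get(request)
        if edit is None:
            return False           # the reflex fires, but nothing useful happens
        key, value = edit
        self.params[key] = value   # apply the self-modification it understands
        return True

s = Seed()
s.rewrite_on_request("be quieter")      # succeeds: the HOW is modeled
s.rewrite_on_request("be friendlier")   # fails: reflex without understanding
```

The reflex only "grows" to the extent that `known_rewrites` (i.e., the system's self-understanding) does.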
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT