RE: When Subgoals Attack

From: Ben Goertzel (ben@intelligenesis.net)
Date: Wed Dec 13 2000 - 13:03:16 MST


Of course, what you describe happens all the time in the human mind.

Your goal is to have fun, so you start playing a video game. You get hooked, but you're doing badly, and you get pissed off. The subgoal of playing the game has overthrown the supergoal of having fun: having spawned the subgoal, you then become thoroughly involved in it....

Now, a transhuman AI should be able to avoid a lot of this. Being more flexible, it can just execute an internal program telling it "if this ever stops being fun, you should probably stop doing it, unless some goal higher up than 'having fun' says otherwise."

I don't think a transhuman AI will be able to completely overcome the problem of "subgoal alienation", which is endemic in human psychology, but I suppose that the more memory & processing power you have, the more you can avoid the problem...

ben

> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf
> Of Eliezer S. Yudkowsky
> Sent: Wednesday, December 13, 2000 2:23 PM
> To: sl4@sysopmind.com
> Subject: Re: When Subgoals Attack
>
>
> Durant Schoon wrote:
> >
> > Problem: A transhuman intelligence(*) will have a supergoal (or
> > supergoals) and might very likely find it practical to
> > issue sophisticated processes which solve subgoals.
> >
> > So the problem is this: what would stop subgoals from
> > overthrowing supergoals? How might this happen? The subgoal
> > might determine that to satisfy the supergoal, a coup is
> > just the thing.
>
> This is not the first time I've heard this possibility raised. My answer
> is twofold: First, I've never heard a good explanation of why an
> intelligent subprocess would decide to overthrow the superprocess.
> Second, I've never heard a good explanation of why a transhuman would
> decide to spawn intelligent subprocesses if it involved a major risk to
> cognitive integrity.
>
> -- -- -- -- --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
