Re: When Subgoals Attack

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Dec 14 2000 - 17:07:32 MST


Ben Goertzel wrote:
>
> Sexuality is a subgoal set up by evolution, which has overtaken its
> supergoal (procreation) in many cases. Elsewise, people would never
> use birth control.

Sexuality never had the supergoal of procreation; rather, it is a
historical fact that sexual people reproduced more often in the ancestral
environment, and therefore modern-day humans are sexual. The "supergoal"
of procreation was never represented explicitly in the human mind; it is
simply a historical fact about why sexuality used to be an evolutionary
advantage. (Though procreation was also embedded as an *independent*
instinct, since people who liked children and wanted to have children were
more likely to have them and treat them well...)

You can't generalize from this to assume that, say, my decision to cross
the street will take over from my wanting to get to the restaurant, and
that I'll keep on crossing and crossing and crossing and never get to the
restaurant...

> Sometimes the subgoal should take over for the goal. If your goal is to
> write a book, and a subgoal of this is to solve a certain intellectual
> problem, you may find out that the problem itself is more interesting
> than book-writing... give up the book-writing project and devote
> yourself only to the problem. The subgoal then replaces its supergoal
> as a way of achieving the supersupergoal of amusing oneself,
> stimulating one's mind, or whatever...

I would not call an intermediate goal a supergoal - that term should be
reserved strictly for the top layer. Rather, a layer-3 subgoal turns out
to be more important than the layer-2 subgoal, under - and this is the key
point - the ultimate arbitration of the layer-1 supergoal. The L3 subgoal
turns out to be useful not just for the particular L2 subgoal that spawned
it, but for another, higher-priority L2 subgoal, or perhaps even an L1
supergoal. You can't necessarily extrapolate from an L3 subgoal
overthrowing an L2 subgoal to conclude that an L2 subgoal can overthrow an
L1 supergoal.
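
To make the layering concrete, here's a minimal sketch in Python - every
name in it (Goal, worth, the particular supergoals) is invented for
illustration, not a description of any real goal-system design - of a tree
in which a subgoal has no standing of its own. Its worth is always derived
from whatever it serves above it, so an L3 subgoal can come to outrank the
L2 subgoal that spawned it only because the L1 layer arbitrates it that way.

# Toy goal tree: all names here are invented for illustration.

class Goal:
    def __init__(self, name, intrinsic=0.0, parents=()):
        self.name = name
        self.intrinsic = intrinsic    # nonzero only for L1 supergoals
        self.parents = list(parents)  # the goals this one is a means to

    def worth(self):
        # An L1 supergoal's worth is intrinsic; everything below it is
        # worth only what it contributes upward, so no subgoal can ever
        # outrank the top layer.
        if not self.parents:
            return self.intrinsic
        return sum(parent.worth() for parent in self.parents)

# L1 supergoals
amuse_self  = Goal("amuse oneself", intrinsic=10.0)
earn_living = Goal("earn a living", intrinsic=5.0)

# L2 subgoal spawned by earn_living; L3 subgoal spawned by the book
write_book    = Goal("write the book", parents=[earn_living])
solve_problem = Goal("solve the problem", parents=[write_book])

print(solve_problem.worth())  # 5.0 - justified only through the book

# Ben's case: the problem turns out to serve an L1 supergoal directly,
# so the L1 layer re-arbitrates and the book loses.
solve_problem.parents = [amuse_self]
print(solve_problem.worth() > write_book.worth())  # True

The point of the toy is only that the "overthrow" happens entirely under
L1 arbitration; delete the L1 nodes and nothing below them has any worth
at all.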

In a sense, there are no layers and there are no subgoals; there are
simply supergoals, models of reality, and decisions. The "subgoal" is
simply a convenient abstraction which saves us the trouble of remembering,
on every occasion, that we are writing chapter 3 in order to finish the
book in order to get paid in order to eat, et cetera, and lets us
concentrate simply on finishing chapter 3. The human mind decidedly lacks
automatic instantaneous change propagation, so it's quite possible that we
would blindly go on writing chapter 3 even if a supergoal changed -
acting, if you'll pardon the phrase, like machines.

But the amount of cognition that can be devoted to checking higher levels
of the goal chain changes, depending on how large the decision is. It's a
bad idea to reconduct the self-examination of whether you really like
family Christmas get-togethers every time you decide where to carve the
turkey; that higher-level decision can be cached. But it's not such a bad
idea to reconduct the self-examination once a year, and at that time, it
may be appropriate to check whether the original reason (higher-level
subgoal) of bonding with your family still retains its previous value.
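
As a sketch of what that caching might look like - Python again, and both
the names and the once-an-hour figure are invented purely for illustration
- a subgoal can hold a cached justification and only walk back up the goal
chain at some re-examination interval, which is exactly how a changed
supergoal can go unnoticed for a while:

import time

# Toy cached subgoal: every name here is invented for illustration.
class CachedSubgoal:
    def __init__(self, name, justification, recheck_every):
        self.name = name
        self.justification = justification  # callable that asks the supergoal
        self.recheck_every = recheck_every  # seconds between re-examinations
        self._cached = justification()
        self._last_check = time.monotonic()

    def still_worth_doing(self):
        # Small decisions just use the cached answer; only when enough
        # time has passed (a stand-in for "how large the decision is")
        # do we walk back up the chain and ask the supergoal again.
        now = time.monotonic()
        if now - self._last_check >= self.recheck_every:
            self._cached = self.justification()
            self._last_check = now
        return self._cached

# Supergoal state that can change out from under the cache.
book_contract_still_pays = True
write_chapter_3 = CachedSubgoal(
    "write chapter 3",
    justification=lambda: book_contract_still_pays,
    recheck_every=3600.0,  # re-examine once an hour
)

book_contract_still_pays = False             # the supergoal just changed...
print(write_chapter_3.still_worth_doing())   # True: stale cache, still typing

Nothing absurd happens inside the sketch itself; the absurdity is the gap
between the cached answer and the current state of the supergoal - the
writing-chapter-3-like-a-machine case above.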

Now, after all that, I'll also turn around and say that, in the human
mind, a subgoal can overthrow a supergoal! There are at least three
different human subsystems that could be labeled "supergoal": the
conscious beliefs about concrete goals that determine our conscious
decisions, the evolved instincts that determine how we feel about those
goals, and the philosophical beliefs that determine how we should choose
goal systems. All of these systems affect all the other systems, so it's quite
systems. All of these systems affect all the other systems, so it's quite
possible for a subgoal to alter a philosophical belief which in turn
alters a concrete goal that happens to be the parent of the original
subgoal. And since change propagation in the human goal system occurs
relatively slowly compared to the speed of conscious thought, it's
possible for absurdities to occur within the system.

> No, a single byte is not really capable of containing a self-model and the
> processes for maintaining this self-model.

Yes, that's why the trillion percent overhead - each byte requires a
10-gig mind (roughly 10^10 bytes per byte, i.e. about 10^12 percent) to
look after that byte's interests... unless, of course, the process is
recursive.

> I do believe that "alienated subgoals" are an inevitable part of
> intelligence, but I also suspect that this phenomenon can be reduced to
> a much lower level than we see in the human mind, in a transhuman AI.

Well, as long as you also suspect that this phenomenon being reduced to a
"much lower level" will result in qualitatively different behavior, it
might be sufficiently nonanthropomorphic to get by...

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


