From: Eliezer S. Yudkowsky (email@example.com)
Date: Wed Dec 13 2000 - 14:45:03 MST
Marvin Minsky, a parent of agent-based AI, would undoubtedly approve of the
notion of subgoals attacking - in fact, I believe he did - and insofar as
Webmind's underlying philosophy incorporates "Society of Mind" ideas, it
might be vulnerable to subgoal-stomping-on-a-supergoal problems.
Ben Goertzel, for philosophical reasons, may choose a design specifically
tuned to give subgoals autonomy. In the absence of that design decision,
I do not expect the problem to arise naturally. How much intelligence
does a subgoal subprocess have? How much ability to independently process
goals? If a subgoal subprocess has no high-level intelligence, then it
does not have the capability to decide to rebel. If a subgoal subprocess
has independent intelligence which contains content representing the
higher-level supergoals that gave rise to it, then the subprocess will not
*want* to rebel.
The subprocess will have the extra processing power to say, not "I want
ice cream", but "I am being Friendly by getting ice cream".
Enough intelligence to "rebel" implies a very large degree of autonomy,
including a complete high-level thought loop. And yet this subprocess
doesn't have enough extra disk space to store the fact that its target
goal is a subgoal of Friendliness? If you're going to implement a
complete mind, then storing the supergoal context of the
subgoal-to-act-upon is a very small investment, relatively speaking. If
subgoal subprocesses have the ability to go rogue and cause cognitive
dysfunction, then it's a very worthwhile investment.
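To make the "small investment" concrete, here is a minimal sketch - not Webmind's or CaTAI's actual design, just an illustrative data structure with hypothetical names - of a subgoal record that carries a pointer to its supergoal context, so any subprocess intelligent enough to act on the subgoal can also see why it exists:

```python
# Hypothetical sketch: a goal node that stores its supergoal context.
# Names (Goal, serves, supergoal_chain) are illustrative, not from any
# real system discussed in this post.

from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Goal:
    description: str
    parent: Optional["Goal"] = None  # the supergoal this goal serves, if any

    def supergoal_chain(self) -> List[str]:
        """Walk up to the root goal - the 'extra disk space' in question."""
        chain, node = [], self
        while node is not None:
            chain.append(node.description)
            node = node.parent
        return chain

    def serves(self, supergoal_description: str) -> bool:
        """A subprocess can check that acting here still serves a supergoal."""
        return supergoal_description in self.supergoal_chain()

friendliness = Goal("be Friendly")
get_ice_cream = Goal("get ice cream", parent=friendliness)

# The subprocess doesn't merely "want ice cream"; it can see the full chain:
print(get_ice_cream.supergoal_chain())  # ['get ice cream', 'be Friendly']
```

The point of the sketch is proportion: the supergoal link is one field per goal, trivial next to the cost of a complete high-level thought loop.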
So while the Minskyites might make problems for themselves, I can't see
the when-subgoals-attack problem applying to either the CaTAI class of
architectures, or to the transhuman level.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence