Re: Review of Novamente

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun May 19 2002 - 18:59:47 MDT


Ben Goertzel wrote:
>
> > Well, I can only criticize what's *in* the book. If in the whole book
> > there's no mention of emergent maps and then you say that you
> > expect most of
> > Novamente's functionality to come from emergent maps, then
> > there's not much I can say about it;
>
> Well I did send you an extra chapter on emergent maps, drawn from previously
> written material that hadn't made its way into the book.

And what the chapter contains is descriptions of emergent maps that very
strongly resemble the code-level elements and behaviors. These are things
whose behaviors I am willing to believe you understand, since they are
described precisely in the account of the system's code level. Of course,
the chapter you sent me still does not contain an account of how
*specifically* these maps emerge, so you may not get even those maps.

I guess my basic thesis is: "If you don't know how to describe how it
emerges, what it does, why it's there, how it contributes to general
intelligence - if you do not, in short, know fully what you are doing and
why - you will not succeed."

There is no room for hope in this. Proving to someone else that something
occurs may be an unnecessarily high standard, but you should at least be able
to visualize for yourself how it occurs - not merely hope, or intuit, that it
occurs.

> And, some of the others who read it -- who were more a priori sympathetic
> than you to my overall philosophy of mind -- seemed to be more willing to
> "fill in the gaps" themselves and have had a more positive assessment of the
> design than yourself.

I did fill in the gaps. The result of filling in the gaps was a system that
was potentially more powerful than the sum of its parts - but not nearly
powerful enough to be an AGI. Remember, when I finished, I did send you
that example account of how all the generic subsystems could work together
on the same problem, and then I asked you if that was how maps work. And
you said, "Gee, maybe we should talk about this in the manuscript
somewhere." And I said, "Gee, maybe you should." Cuz the problem with
asking people to fill in the gaps is that they fill in the gaps using
*their* philosophy of mind instead of *your* philosophy of mind.

> There seems to be a fundamental philosophical point here...
>
> I think most of the X's that you're referring to are things that, according
> to our theory of mind, are supposed to be *emergent* phenomena rather than
> parts of the codebase
>
> The book gives a mathematical formalization of the stuff that's supposed to
> be in the codebase

When you say "emergent", I say "higher level of organization". I deeply
distrust your intuitions on this for two reasons:

1) When trying to figure out how to create a high-level behavior, I have to
search through a large space of plausible-looking wrong answers in order to
find even a single answer that looks workable enough to meet my standards.

2) I feel that one of the classical psychological pathologies of AI is (a)
not seeing the higher levels of organization, (b) denying they exist, or (c)
hoping that they will emerge for free even though you don't know exactly
how.

This makes me suspicious of any claim that a higher-level behavior emerges
automatically, because from over here it sure as heck looks like a small target
in design space. It also makes me suspicious of claims that you can get higher
levels of organization without doing all that work. Now, I could be just
spinning my wheels and making unnecessary problems for myself. That's
always a threat. But I think it would take a concrete demonstration or at
least a fully visualized walkthrough to convince me that there is a free
lunch here. Sure, there might be free lunches for some behaviors, but *all*
of them? Why postulate this when we know that evolution is perfectly
capable of multilayered design? Is there any other example in biology where
you can design cells that work well as cells and find that tissues, organs,
and organ systems all emerge automatically? Why would you *expect* a free
lunch here?

> > Why did it take so long to scale from spider brains to human brains?
>
> 'Cause evolution is a terribly inefficient learning mechanism ;>

I don't think this is enough to explain it. Under DGI it's pretty clear why
you have to go through chimpanzees in order to incrementally evolve human
intelligence (and I spend time discussing it in the paper). Given the
Novamente theory in which all higher levels of cognition emerge naturally
from a small set of lower-level behaviors, there is no obvious (to me)
reason why the Novamente behaviors would not be incrementally evolvable, nor
any obvious reason why spider brains would not incrementally scale to human
size and capabilities. Is there a reason why the Novamente design - not
just as it is now, but for all plausible variations thereof - is
unevolvable?

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
