RE: Review of Novamente

From: Ben Goertzel (ben@goertzel.org)
Date: Sat May 04 2002 - 11:41:28 MDT


Hi,

Here is a brief reaction to Eli's reaction to Novamente.

A fairly high-level overview of Novamente is at www.realai.net/article.htm

What Eliezer read was a book-length rough-draft overview of the design.
I've distributed this manuscript only to a handful of people because it's in
very crude rough form and in need of many months of editing and rewriting
and improving.

Unfortunately, I think that Eliezer did not really understand the basic
concepts underlying the design, based on his reading of the manuscript.
Obviously, since Eliezer is very smart and has a fair bit of relevant
knowledge, this means that the book manuscript is in piss-poor shape. We
should have a much better draft within 6 months or so. My feeling is that
Eliezer's understanding of the design was impaired significantly by his
strong philosophical biases which are different from my own strong
philosophical biases.

To sum up before giving details, basically, Eliezer's critique is that

1) he doesn't see how a collection of relatively simple, generic processes
working together can give rise to a rich enough set of emergent dynamics and
structures to support AGI

2) he doesn't think it's sensible to create a network *some of whose basic
nodes and links have explicit semantic meaning*, but whose basic cognitive
dynamics are based on *emergent meaning resident in patterns in the basic
node-and-link network*

Since I can't prove he's wrong or I'm right on these points, I guess it's
just gonna remain a difference of intuition for a while.

One nice thing about this sort of work is that it's empirical. Assuming the
team holds together, we will finish implementing and testing the mofo and
see if we're right or wrong.

> My overall reaction is that Novamente is much, much simpler than
> I had been
> visualizing from Ben's descriptions;

Actually we have been explicitly *striving* for simplicity. Webmind was
more complex with more functionally specialized parts. I look at the
greater simplicity of Novamente as an advantage. Of course, the design is
highly flexible so that we can create greater specialization if it's needed.

This is a philosophical difference, however. You seem to believe that an AI
design has to be very complicated. I think Novamente is still too
complicated, and that in a good design, a heck of a lot of the complexity of
mind should emerge rather than being part of the explicit design. Of
course the design has to be made with the proper sorts of emergence
explicitly in mind, and one of the many shortcomings of the current
Novamente manuscript version is that it doesn't focus on this enough.

> Capsule description of Novamente's architecture: Novamente's core
> representation is a semantic net, with nodes such as "cat" and "fish", and
> relations such as "eats". Some kind of emotional reaction is called for
> here, lest others suspect me of secret sympathies for semantic networks:
> "AAAARRRRGGGHHH!" Having gotten that over with, let's forge ahead.

This is not a correct statement; the core data representation is not a
semantic network.

It is a NETWORK, with nodes and links. Some nodes and links may have
transparent semantic meaning, such as "cat" or "eats". Others -- the vast
majority -- will not. And if a node has a transparent meaning like "cat",
this meaning (and the node) must be built by the system, not loaded in
externally.

The intention is that much of the semantics of the system resides, not
directly in individual nodes and links, but rather in "maps" or
"attractors" -- patterns of connectivity and interconnection involving large
numbers of nodes and links.

Quite explicitly, the node-and-link structure and dynamics aims to combine
aspects of semantic networks and neural networks.
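To make this concrete, here is a toy sketch of the kind of node-and-link
structure I mean. It's in Python for readability (the real system is C++),
and every name in it is illustrative, not an actual Novamente class:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TruthValue:
        strength: float     # probability-like value in [0, 1]
        confidence: float   # weight of evidence behind that strength

    @dataclass
    class Node:
        # Most nodes have no transparent meaning, so name is usually None;
        # a node meaning "cat" must be built by the system, not loaded in.
        name: Optional[str]
        truth: TruthValue
        importance: float = 0.0   # attention value, neural-net style

    @dataclass
    class Link:
        kind: str                 # e.g. "InheritanceLink"
        source: Node
        target: Node
        truth: TruthValue

    # The semantics of "cat" lives mostly in a *map*: a set of nodes and
    # links that tend to be active together -- an attractor of the network
    # dynamics -- rather than in any single Node object.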

It is not a semantic network according to standard definitions, far from it,
because the majority of nodes do not have any individual semantic meaning
easily translatable into English or any other human language.

It may be too close to semantic networks for your taste, but this does not
make it a semantic network.

> Novamente's core representation is not entirely that of a
> classical AI; Ben
> insists that it be described as "term logic" rather than
> "predicate logic",
> meaning that it has quantitative truth values and quantitative attention
> values (actually, Novamente can express more complex kinds of truth values
> and attention values than simple quantities).

Okay, there are two different confusions in this paragraph.

1) Logical inference is only one among very many dynamics involved in
Novamente. "Term logic" is not a representation, it is a way of combining
some links to form new links. The node-and-link representation is designed
to support probabilistic term logic among many other important dynamics.

2) The difference between predicate logic and term logic has nothing to do
with the use of probabilistic truth values. The difference between
predicate logic and term logic has to do with the structure of the inference
rules involved. In term logic two statements can only be combined if they
share common terms; this is not true in predicate logic. This difference
has a lot of philosophical implications: it means that term logic is not
susceptible to the same logical paradoxes as predicate logic, and that term
logic is better suited for implementation in a distributed self-organizing
knowledge system like Novamente.
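
To make the structural difference concrete: the basic term-logic deduction
rule takes two inheritance relations sharing a middle term -- "A is B" and
"B is C" -- and infers "A is C". Here is a toy Python sketch using a simple
independence-based truth-value formula (the actual Novamente formulas are
more refined than this):

    def deduce(sAB, sBC, sB, sC):
        """Estimate the strength of A->C from the strengths of A->B and
        B->C plus base rates for the terms B and C. The premises MUST
        share the term B -- that shared term is exactly what makes this
        term logic rather than predicate logic."""
        if sB >= 1.0:
            return sBC
        # If A is a B, follow B->C; otherwise fall back on C's base rate
        # among non-B's (clamped so the estimate stays in [0, 1]).
        fallback = max(0.0, sC - sB * sBC) / (1.0 - sB)
        return sAB * sBC + (1.0 - sAB) * min(1.0, fallback)

    # E.g. deduce(0.9, 0.8, 0.2, 0.3) combines "cats are mammals" and
    # "mammals are animals" into an estimated strength for "cats are
    # animals".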

> Similarly, Novamente's
> logical inference processes are also quantitative; fuzzy logic rather than
> theorem proving.

Again there are two different confusions overlaid.

First, "Fuzzy logic" in the technical sense has no role in Novamente.

Next, there is a whole chapter in the manuscript on theorem-proving. I
think this is one thing the system will eventually be able to do quite well.
In fact, I think that probabilistic inference, together with non-inferential
cognitive processes like evolutionary concept creation and
association-formation, is highly critical to mathematical theorem-proving.

And I think that expertise at theorem-proving will be an important partway
step towards intelligent goal-directed self-modification. There was an SL4
thread on the possible use of the Mizar theorem/proof database for this
purpose, about a year ago.

> However, from my perspective, Novamente has very *simple* behaviors for
> inference, attention, generalization, and evolutionary programming.

We have tried to simplify these basic cognitive processes as much as
possible.

The complexity of cognition is intended to emerge from the self-organizing
interaction of the right set of simple processes on a large set of
information. NOT from complexity of the basic behaviors.

> For
> example, Novamente notices spontaneous regularities by handing off the
> problem to a generic data-mining algorithm on a separate server. The
> evolutionary programming is classical evolutionary programming.
> The logical
> inference has classical Bayesian semantics. Attention spreads
> outward like
> ripples in a pond.

All of these statements are wrong, Eliezer.

Novamente notices regularities in internal and external data by many different
mechanisms. The Apriori datamining algorithm that you mention is a simple
preprocessing technique used to suggest potentially interesting regularities
to the main cognition algorithms. It is by no means the sum total or even
the centerpiece of the system's approach to recognizing regularities.
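
For the record, the role Apriori plays is roughly the following (a toy
sketch, not the production code):

    from collections import Counter
    from itertools import combinations

    def candidate_pairs(transactions, min_support):
        """One Apriori-style pass: find item pairs co-occurring in at
        least min_support transactions. In Novamente the survivors are
        handed to the main cognitive processes as *suggestions* to
        evaluate, not treated as conclusions."""
        counts = Counter()
        for t in transactions:
            for pair in combinations(sorted(set(t)), 2):
                counts[pair] += 1
        return [pair for pair, n in counts.items() if n >= min_support]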

The evolutionary programming in Novamente is not classical ev. programming;
it has at least two huge innovations (only one of which has been tested so
far): 1) evolution is hybridized with probabilistic inference, which can
improve efficiency by a couple orders of magnitude, 2) evolution takes place
on node-and-link structures interpretable as combinatory logic expressions,
which means that functions with loops and recursion can be much more
efficiently learned (this is not yet tested). These may sound like small
technical improvements, but they are specifically improvements that allow
evolutionary programming to become smarter and more effective through
feedback with other parts of the mind.
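
To give the flavor of the first innovation: instead of generating
candidates by blind crossover and mutation, generation is biased by a
probabilistic model of what has worked so far. In Novamente that model
comes from the inference machinery itself; the toy sketch below substitutes
the simplest possible stand-in, a per-position distribution fit to the
elite (an estimation-of-distribution step, not the actual algorithm):

    import random

    def model_guided_generation(population, fitness, elite_frac=0.3):
        """One generation of a toy estimation-of-distribution algorithm
        over bit-strings: fit a per-bit probability model to the best
        candidates, then sample the next population from that model."""
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[:max(1, int(len(ranked) * elite_frac))]
        n = len(population[0])
        probs = [sum(ind[i] for ind in elite) / len(elite)
                 for i in range(n)]
        return [[1 if random.random() < p else 0 for p in probs]
                for _ in population]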

The logical inference system does not have classical Bayesian semantics, not
at all. No single consistent prior or posterior distribution is assumed
over all knowledge available to the system. Rather, each individual
inference constructs its own distributions prior to inference. This means
that the inference behavior of the system as a whole involves many
overlapping pdfs rather than one big pdf. This is just NOT classical
Bayesian semantics in any sense, sorry.
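
A trivial illustration of the point, with all names illustrative: each
inference estimates its term probabilities from its own local evidence, and
nothing forces two inferences' estimates to agree:

    def local_strength(term, evidence):
        """Estimate P(term) from the evidence available to one
        particular inference -- its own local distribution."""
        return sum(1 for episode in evidence if term in episode) / len(evidence)

    # Two separate inferences, each constructing its own distribution:
    sB_first = local_strength("B", [{"A", "B"}, {"B"}, {"C"}])          # 2/3
    sB_second = local_strength("B", [{"B", "C"}, {"C"}, {"A"}, {"C"}])  # 1/4
    # No single global prior or posterior reconciles these estimates.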

> Novamente does not have the complexity that
> would render
> these problems tractable; the processes may intersect in a common
> representation but the processes themselves are generic.

If by "generic" you mean that Novamente's basic cognitive processes are not
functionally specialized, you are correct.

And I think this is as it should be.

> Ben believes that Novamente will support another level of
> organization above
> the current behaviors, so that inference/attention/mining/evolution of the
> low level can support complex constructs on the high level. While I
> naturally agree that having more than one level of organization is a step
> forward, the idea of trying to build a mind on top of low-level behaviors
> originally constructed to imitate inference and attention is... well,
> Novamente is already the most alien thing I've ever tried to wrap my mind
> around;

I am afraid that, because the description you read was a very sloppy rough
draft, and because the design is so intuitively alien to you, you have
managed to achieve only a very partial understanding of the system. Many
things that, to me, are highly conceptually and philosophically significant,
you seem to pass off as "implementation details" or "tweaks to existing
algorithms."

> if Novamente's current behaviors can give rise to full
> cognition at
> higher levels of organization, it would make Novamente a mind so
> absolutely
> alien that it would make a human and a Friendly AI look like
> cousins.

Yes, I agree, if Novamente becomes a mind it will be a very alien mind. We
are not trying to emulate human intelligence, not at all. Equal and
surpass, but not emulate.

To emulate human intelligence on a digital computer, we need: a) way bigger
computers, b) way more understanding of how the brain works.

The only hope for the short run, in my view, is to seek to build a very
alien intelligence, one that exploits the unique power of digital computers
rather than trying to emulate the brain and its dynamics in any detail.

> The lower
> levels of Novamente were designed with the belief that these lower levels,
> in themselves, implemented cognition, not with the intent that these low
> levels should support higher levels of organization.

This is completely untrue. You were not there when we designed these
levels, so how on Earth can you make this presumption??

I spent the eight years before starting to design Webmind writing books and
papers on self-organization and emergence in the mind (see especially
Chaotic Logic and From Complexity to Creativity).

OF COURSE, I did not design the lower levels of the system without the
emergence of a higher level of structure and dynamics as a key goal.

> For example, Ben has
> indicated that while he expects high-level inference on a
> separate level of
> organization to emerge above the current low-level inferential
> behaviors, he
> believes that it would be good to summarize the high-level patterns as
> individual Novamente nodes so that the faster and more powerful low-level
> inference mechanisms can operate on them directly.

I think that the automated recognition *by the system* of high-level
patterns in the system's mind, and the encapsulation of these patterns in
individual nodes, is *one valuable cognitive heuristic* among many.

The interplay between the concretely implemented structures/dynamics and the
emergent ones, in Novamente, is going to be quite complex and interesting.
This is where the complexity SHOULD lie, not at the level of the basic
implemented structures and dynamics.
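
Schematically, the encapsulation heuristic looks like this (a toy sketch;
in the real system the recognition of the map is itself done by the
system's pattern-mining and clustering dynamics):

    def encapsulate_map(links, map_nodes):
        """Given a recognized 'map' (a set of nodes whose pattern of
        interconnection carries emergent meaning), create one new node
        standing for the whole map, linked to each member, so the fast
        concretely-implemented dynamics can act on the pattern
        directly."""
        new_node = ("MapNode", frozenset(map_nodes))
        new_links = {(new_node, member) for member in map_nodes}
        return new_node, links | new_links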

> To see a genuine AI capability, you have to strip away the suggestive
> English names and look at what behaviors the system supports even
> if nobody
> is interpreting it. When I look at Novamente through that lens, I see a
> pattern-recognition system that may be capable of achieving limited goals
> within the patterns it can recognize, although the goal system currently
> described (and, as I understand, not yet implemented or tested)

Webmind's goal system was implemented and tested, Novamente's is not (yet).

> would permit
> Novamente to achieve only a small fraction of the goals it should
> be capable
> of representing. Checking with Ben confirmed that all of the old Webmind
> system's successes were in the domain of pattern recognition, so
> it doesn't
> look like my intuitions are off.

Yes, we were developing Webmind in the context of a commercial corporation,
and so most of our practical testing concerned pragmatic data analysis
tasks. This doesn't mean that the architecture was designed to support ONLY
this kind of behavior, nor even that it was the most natural stuff for us to
be doing, in AI terms. In fact, we ended up using the system for a lot of
"text analysis" work that it was really relatively *ill-suited* for, because
that was what the business's products needed. (And the system performed
well at text analysis, even though this really wasn't an appropriate
application for it at that stage of its development.)

Developing AI in a biz context has its plusses and minuses. The big plus is
plenty of resources. The big minus is that you get pushed into spending a
lot of time on applications that distract the focus from real AI.

> By the standards I would apply to real AI, Novamente is
> architecturally very
> simple and is built around a relative handful of generic
> behaviors; I do not
> believe that Novamente as it stands can support Ben's stated goals of
> general intelligence, seed AI, or even the existence of substantial
> intelligence on higher levels of organization.

You are right: Novamente is architecturally relatively simple and is built
around a relative handful of generic behaviors.

It is not all THAT simple of course: it will definitely be 100,000-200,000
lines of C++ code when finished, and it involves around 20 different mental
dynamics. But it is a lot simpler than Eliezer would like. And I think its
*relative* simplicity is a good thing.

I suspect that an AI system with 200 more-specialized mental dynamics,
rather than 20 generic ones, would be effectively impossible for a team of
humans to program, debug and test. So: Eliezer, I think that IF you're
right about the level of complexity needed (which I doubt), THEN Kurzweil is
also right that the only viable approach to real AI is to emulate human
brain-biology in silico. Because I think that implementing a system 10
times more complex than Novamente via software engineering rather than
brain-emulation is not going to be feasible.

Anyway, I do not claim to have proved that Novamente will lead to seed AI or
AGI. Obviously, right now, whether it will or will not is largely a matter
of intuition.

However, I should add that I am not the *only* person on Earth who believes
Novamente has a fighting chance at achieving its goals; there are at least a
dozen others who understand the design pretty well and feel as I do. So my
intuition is not a unique one.

The book draft that I sent Eliezer to read was really quite rough and hard
to understand. It is obvious from the comments in his e-mail to SL4 that he
missed a few rather basic points; for that I blame myself for not writing a
better book (though it will be much better before we distribute it widely).
However, I suspect that even when the book is in great form, Eliezer still
won't like the design, because his intuitions about what an AGI design
should look like are radically different from mine.

And I think that's just fine.

I look forward to, one day in the future, Eliezer sending *me* a detailed
description of his own design for an AGI/seed-AI, so I can tell him why,
according to my intuition, his design can't possibly work ;> Or maybe not,
maybe I'll be convinced, who knows!!

I definitely don't claim that Novamente is the ONLY path to AGI/seed-AI. In
spite of Eliezer's criticisms, I still believe it is *a* feasible path. And
I think that Peter Voss's approach *may be* a path -- I don't know all the
details of his work, and I haven't thought nearly as hard about his approach
as about my own.

-- Ben G


