Complexity of AGI

From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 19 2002 - 14:08:36 MDT


Eliezer,

I have thought a little about your intuition that an AGI needs to be 1-2
orders of magnitude more complex than Novamente.

It seems to me that there is some threshold T such that the following holds.

IF the complexity of an AGI needs to be > T, THEN it makes sense to focus
efforts on human brain simulation (as advocated by Kurzweil, Eugene Leitl,
and many others), rather than on designing systems loosely inspired by the
human brain/mind.

And, as far as I understand it, your DGI approach is more brainlike than
Novamente, but still "loosely inspired by the human brain-mind" rather than
being a cellular or molecular level simulation.

What is T?

My intuition is that T is around, roughly, 3-5 times the complexity of the
current Novamente design. Beyond this level, the difficulties of
parameter-tuning and engineering and performance analysis are just going to
become WAY too great for any team of humans to handle.

Novamente is now about 30K lines of C++; it will be somewhere between 100K and
300K when done. The total complexity of the algorithms in it probably does
not exceed that of the algorithms in a complex program like an efficient C++
compiler. (Compilers have all sorts of shit in them: graph-coloring
algorithms, conversions between different types of trees, etc. etc. etc.)
However, the algorithms in a compiler are hooked together in a rigid and
predictable way, whereas the algorithms in Novamente are adaptive and self-
and inter-referential, which means that the testing/tuning process for
Novamente is going to vastly exceed that of a compiler (as we discovered in
building and toying with Webmind!!).
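(For readers unfamiliar with the compiler reference: graph coloring is how a
compiler assigns registers. Here is a minimal illustrative sketch, my own toy
code rather than anything from an actual compiler or from Novamente.
Variables are nodes; an edge means two variables are live at once and so
need different registers.)

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Greedy graph coloring, as used for register allocation.
// Returns the number of "registers" (colors) this greedy pass uses.
int colorGraph(const std::map<std::string, std::set<std::string>>& interference,
               std::map<std::string, int>& colorOf) {
    int numColors = 0;
    for (const auto& [node, neighbors] : interference) {
        std::set<int> used;  // colors already taken by neighbors
        for (const auto& n : neighbors) {
            auto it = colorOf.find(n);
            if (it != colorOf.end()) used.insert(it->second);
        }
        int c = 0;
        while (used.count(c)) ++c;  // pick the lowest unused color
        colorOf[node] = c;
        if (c + 1 > numColors) numColors = c + 1;
    }
    return numColors;
}
```

The point of the example is exactly the contrast drawn above: this algorithm
is self-contained and its behavior is fixed by its input graph, which is what
makes a compiler testable in a way an adaptive, self-referential system is not.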

The human mind/brain contains a lot of specialized inference, perception and
action modules, dealing with things like spatial and temporal inference,
social reasoning, aspects of language processing, each sensory stream that
we have, etc. etc.

If an AGI has to be engineered to contain *significantly qualitatively
different* code for each of these specialized functional mind-modules, then
I suggest that this AGI is going to be 10-20 times more complex than
Novamente, and hence over my intuitively posited T value. In other words,
rather than build a system like this, which will have so many parameters it
will be un-tunable, we'd be better off focusing on brain scanning and
cellular brain simulation. In my guesswork opinion...

On the other hand, my hypothesis is that we can achieve these specialized
functions by appropriately modifying the parameters of a handful of
individually-narrowly-intelligent, intelligently interacting learning
algorithms. If this is true then something of *borderline manageable
complexity* like Novamente (or, say, A2I2) can work, and we don't
necessarily need to follow the path of detailed human brain simulation.
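(To make the shape of this hypothesis concrete, here is a toy sketch, with
all names invented and a deliberately trivial learning rule standing in for
the real ones. The point is the architecture: one body of learning code,
specialized per cognitive function only by its parameter settings.)

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <string>

// One learning rule (here a trivial exponential moving average);
// the parameter alone is what distinguishes the "specialized modules".
struct Learner {
    double rate;            // the only thing that varies per module
    double estimate = 0.0;  // current learned value
    void observe(double x) { estimate += rate * (x - estimate); }
};

// A "mind" as a handful of identically-coded learners, each tuned
// differently for its specialized function.
std::map<std::string, Learner> makeMind() {
    return {
        {"spatial",  Learner{0.5}},   // fast-adapting
        {"social",   Learner{0.05}},  // slow and stable
        {"language", Learner{0.2}},
    };
}
```

On the "qualitatively different code per module" alternative, each entry
above would instead be a distinct class with its own algorithm, and the
parameter-tuning burden would multiply accordingly.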

One may argue that each decade software and hardware tech get better,
enabling us to build more & more complex software systems. It is true. But
we do run up against barriers of human psychology and limitations of human
communication. Novamente is already WAY more complex in its inter-component
interactions than anything ever built... barring an intervening Singularity,
it'll be at least a decade, maybe a few, before software systems of this
complexity are routine in the sense that transaction systems and big OO
systems of other sorts are routine today.

-- Ben G



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT