Re: Complexity of AGI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun May 19 2002 - 15:23:13 MDT


Ben Goertzel wrote:
>
> Eliezer,
>
> I have thought a little about your intuition that an AGI needs to be 1-2
> orders of magnitude more complex than Novamente.
>
> It seems to me that there is some threshold T so that the following holds.
>
> IF the complexity of an AGI needs to be > T, THEN it makes sense to focus
> efforts on human brain simulation (as advocated by Kurzweil, Eugene Leitl,
> and many others), rather than on designing systems loosely inspired by the
> human brain/mind.

I fail to see why, if a problem is beyond human understanding, it can be
solved by building systems in which we don't even know what the pieces are
doing or why. Learning to build brain simulations will involve better
understanding of the function of pieces of the brain, not just the capability
of running finer and finer simulations of neural networks which we don't
understand. My intuition is that simulating a working brain without
understanding the mind, a la Kurzweil and Leitl, will turn out to require an
insanely detailed simulation (down to the microtubular level, perhaps) to
ensure that all necessary functional qualities of neurons are duplicated
when the researchers don't know, in fact, what the functional qualities of
neurons are. This entire scenario seems to me to be built around Kurzweil's
desire to convince an audience of the workability of transhuman intelligence
without Kurzweil having to defend the idea that anyone will ever comprehend
intelligence. It is not futuristically plausible. Kurzweil is
(unconsciously, I assume) optimizing for convenience of argument, not
faithfulness to the real world.

In real life, the researchers would start to see what the neural networks
are doing and why long before they have the capability to run a simulation
perfect enough that the scan works whether or not they know what the networks
are doing. Could we eventually simulate networks so perfectly that they
worked without our understanding their higher functions? Yes. But that's
an existence proof, not a prediction. It's not how the future would
actually develop.

> My intuition is that T is around, roughly, 3-5 times the complexity of the
> current Novamente design. Beyond this level, the difficulties of
> parameter-tuning and engineering and performance analysis are just going to
> become WAY too great for any team of humans to handle.

Large corporations routinely build systems with hundreds of times as many
lines of code as Novamente. Also, I happen to feel that incorrect AI
designs contribute nontrivially to the amount of work that gets dumped on
parameter-tuning, engineering, and performance analysis. Among the
complexity I think you're missing is a lot of the complexity that
goes into managing complexity. An AI that is built wrongly will present the
wrong engineering challenges. Imagine Lenat saying, "Well, suppose that you
need to enter a trillion facts into the system... in this case it would make
sense to scan an existing human brain because no programming team could
handle the engineering challenge of managing relationships among a dataset
that large."

> Novamente is now about 30K lines of C++, it will be somewhere between 100K -
> 300K when done. The total complexity of the algorithms in it probably does
> not exceed that of the algorithms in a complex program like an efficient C++
> compiler. (Compilers have all sorts of shit in them, graph-coloring
> algorithms, conversions between different types of trees, etc. etc. etc.)
> However, the algorithms in a compiler are hooked together in a rigid and
> predictable way, whereas the algorithms in Novamente are adaptive and self-
> and inter-referential, which means that the testing/tuning process for
> Novamente is going to vastly exceed that of a compiler (as we discovered in
> building and toying with Webmind!!).

Of course, it's hard for me to see in advance what will turn out to be the
real, unexpected critical challenges of building DGI. But I suspect that
when the pieces of a correct AI design are hooked together, 90% of the
humanly achievable functionality will take 10% of the humanly possible
tuning. In other words, I think that the tremendous efforts you put into
tuning Webmind are symptomatic of an AI pathology.

> The human mind/brain contains a lot of specialized inference, perception and
> action modules, dealing with things like spatial and temporal inference,
> social reasoning, aspects of language processing, each sensory stream that
> we have, etc. etc.
>
> If an AGI has to be engineered to contain *significantly qualitatively
> different* code for each of these specialized functional mind-modules, then
> I suggest that this AGI is going to be 10-20 times more complex than
> Novamente, and hence over my intuitively posited T value.

That is not the kind of specialized complexity that goes into creating a
DGI-model AI. Computational systems give rise to cognitive talents;
cognitive talents combine with experiential content to give rise to domain
competencies. The mapping from computational subsystems to cognitive
talents is many-to-many. Likewise the mapping from talents to
competencies. Novamente has what I consider to be a too-limited set of
basic computational subsystems. DGI does not contain *more specialized
versions* of these subsystems that support specific cognitive talents, which
is what you seem to be visualizing, but rather contains a *completely
different* set of underlying subsystems whose cardinality happens to be
larger than the cardinality of the set of Novamente subsystems. I agree
that an AI built using Novamente's basic architecture, with lots of
specialized versions of Novamente's basic generic processes, would multiply
Novamente's difficulties.
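
For concreteness, here is a minimal sketch of the kind of many-to-many mapping
I mean, in Python.  The subsystem, talent, and competency names are invented
for illustration; they are not taken from DGI or from Novamente.

    # Hypothetical names, for illustration only.  Each computational subsystem
    # supports several cognitive talents, and each talent draws on several
    # subsystems; likewise for talents and domain competencies.
    subsystem_to_talents = {
        "sensory_modality": {"visualization", "categorization"},
        "concept_kernel":   {"categorization", "analogy"},
        "memory_store":     {"analogy", "recall"},
    }

    talent_to_competencies = {
        "visualization":  {"spatial_reasoning", "physics_intuition"},
        "categorization": {"spatial_reasoning", "social_modeling"},
        "analogy":        {"social_modeling", "mathematics"},
        "recall":         {"mathematics"},
    }

    def competencies_supported_by(subsystem):
        # Trace one subsystem through both many-to-many layers.
        talents = subsystem_to_talents.get(subsystem, set())
        result = set()
        for talent in talents:
            result |= talent_to_competencies[talent]
        return result

    print(competencies_supported_by("concept_kernel"))
    # e.g. {'spatial_reasoning', 'social_modeling', 'mathematics'}

Nothing in the second layer is a specialized copy of anything in the first; a
domain competency emerges from several talents plus experiential content, not
from a dedicated module per competency.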

> In other words,
> rather than build a system like this, which will have so many parameters it
> will be un-tunable,

I believe this problem is an AI pathology of the Novamente architecture.
(This is not a recent thought; I've had this impression ever since I visited
Webmind Inc. and saw some poor guy trying to optimize 1500 parameters with a
GA.)
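
To make the parameter-tuning problem concrete, here is a toy sketch of
GA-based tuning over a 1500-dimensional parameter vector.  This is not
Webmind's actual code; the fitness function and GA settings are invented
stand-ins, and in the real case each fitness evaluation means running the
whole AI and scoring its behavior.

    import random

    # Toy sketch of GA-based parameter tuning; numbers and fitness are made up.
    N_PARAMS, POP, GENERATIONS = 1500, 50, 100

    def fitness(params):
        # Stand-in for "run the system and score its behavior" -- the costly part.
        return -sum((p - 0.5) ** 2 for p in params)

    population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP // 2]
        children = []
        while len(parents) + len(children) < POP:
            a, b = random.sample(parents, 2)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            for i in range(N_PARAMS):
                if random.random() < 0.001:      # occasional mutation
                    child[i] = random.random()
            children.append(child)
        population = parents + children

    print("best fitness:", max(fitness(p) for p in population))

Even this toy version needs on the order of POP * GENERATIONS = 5,000 full
evaluations to crawl around a 1500-dimensional space, and in the real case
each of those evaluations is a full run of the system.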

> we'd be better off to focus on brain scanning and
> cellular brain simulation.

That doesn't help.

> On the other hand, my hypothesis is that we can achieve these specialized
> functions by appropriately modifying the parameters of a handful of
> individually-narrowly-intelligent, intelligently interacting learning
> algorithms. If this is true then something of *borderline manageable
> complexity* like Novamente (or, say, A2I2) can work, and we don't
> necessarily need to follow the path of detailed human brain simulation.

These aren't the only two options. There is more to the universe than
generic algorithms and generic algorithms lightly specialized for particular
domains. Novamente has what I would consider a flat architecture, like
"Coding a Transhuman AI" circa 1998. Flat architectures come with certain
explosive combinatorial problems that can only be solved with deep
architectures. Deep architectures are admittedly much harder to think about
and invent.  Inventing them requires that you listen to your quiet, nagging doubts about
shallow architectures and that you go on relentlessly replacing every single
shallow architecture your programmer's mind invents, until you finally start
to see how deep architectures work.

> One may argue that each decade software and hardware tech get better,
> enabling us to build more & more complex software systems. It is true. But
> we do run up against barriers of human psychology and limitations of human
> communication. Novamente is already WAY more complex in its inter-component
> interactions than anything ever built... barring an intervening Singularity,
> it'll be at least a decade, maybe a few, before software systems of this
> complexity are routine in the sense that transaction systems and big OO
> systems of other sorts are routine today.

I'm sorry, Ben, but I don't think that Novamente lies right at the fringes
of the most complex systems that are humanly comprehensible. Different
people will have different ideas of what constitutes "depth beyond the human
ability to comprehend". I don't see how you can know what's too deep for
humans to comprehend, anyway; all information available is of the form "X is
too deep for me to comprehend at my current level of skill".

I think you'd be better off if you stopped thinking of some level of
complexity as "too difficult" and started thinking of that level of
complexity as "my responsibility, my challenge; the work of Evolution, my
rival and target." I find that quite a number of things supposedly "beyond
human ability" are so-called because people use the phrase "beyond human
ability" when they mentally flinch away from the prospect of having to do
something.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


