DGI, Novamente, AGI,...

From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 19 2002 - 16:45:12 MDT


> My intuition is that simulating a working brain without understanding
> the mind, a la Kurzweil and Leitl, will turn out to require an insanely
> detailed simulation (down to the microtubular level, perhaps) to ensure
> that all necessary functional qualities of neurons are duplicated when
> the researchers don't know, in fact, what the functional qualities of
> neurons are. This entire scenario seems to me to be built around
> Kurzweil's desire to convince an audience of the workability of
> transhuman intelligence without Kurzweil having to defend the idea that
> anyone will ever comprehend intelligence. It is not futuristically
> plausible. Kurzweil is (unconsciously, I assume) optimizing for
> convenience of argument, not faithfulness to the real world.

Well, our intuitions differ here pretty substantially.

I don't *know* how the brain works; nobody does, of course.

My guess is that a simulation at the cellular level, with some attention to
extracellular diffusion of charge and neurotransmitter chemistry, can do the
trick.

In other words, I think that a simulation of the brain can be achieved via a
good understanding of brain cells, their interconnections, and the chemical
reactions mediating their electrical interactions. I think this can be
done, potentially, WITHOUT anywhere near a full understanding of how
thoughts, ideas, fears, dreams, insights and psychoses come out of the
brain. Of course, *some* understanding of the cognitive meanings of neural
structures/dynamics will be needed, but how much?
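
To make "cellular level" concrete, here is a toy sketch in Python of the
crudest such unit, a standard leaky integrate-and-fire neuron. This is
only an illustration of the modeling level I mean, not a claim about the
equations a real brain simulation would use; a serious simulation would
add channel chemistry, neurotransmitter dynamics and diffusion terms.

# Toy sketch: a leaky integrate-and-fire neuron, the crudest
# "cellular level" model. A serious simulation would need far
# richer membrane and neurotransmitter chemistry.
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.054, v_reset=-0.080, resistance=1e7):
    """Integrate dV/dt = ((v_rest - V) + R*I) / tau; spike and reset
    when V crosses threshold."""
    v = v_rest
    spike_times, trace = [], []
    for step, i_in in enumerate(input_current):
        v += dt * ((v_rest - v) + resistance * i_in) / tau
        if v >= v_thresh:              # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset                # reset membrane potential
        trace.append(v)
    return np.array(trace), spike_times

# Constant 2 nA input for half a second of simulated time
trace, spikes = simulate_lif(np.full(5000, 2e-9))
print(len(spikes), "spikes in 0.5 s")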

> In real life, the researchers would start to see what the neural
> networks are doing and why long before you have the capability to run a
> simulation perfect enough that the scan works whether or not you know
> what the networks are doing. Could we eventually simulate networks so
> perfectly that they worked without our understanding their higher
> functions? Yes. But that's an existence proof, not a prediction. It's
> not how the future would actually develop.

I'm not sure; the problem of inferring details of thought dynamics from
brain dynamics is VERY hard. This is an area I work actively in --
inferring complex nonlinear dynamics from numerous coupled time series. I
believe the problem can be solved partially by "narrow AI" (by systems like
the current Novamente version, which may be the most sophisticated existing
system for solving this kind of "inferring the dynamics from the data"
problem), but it's certainly not a no-brainer!!
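
To give a flavor of what "inferring the dynamics from the data" means,
here is a generic toy (emphatically not the Novamente machinery): given
coupled time series, fit a nonlinear map predicting each variable's next
value from the current values, then check the one-step prediction error.
Real brain data is noisier, higher-dimensional, and only partially
observed, which is part of what makes the problem so hard.

# Generic toy version of "inferring the dynamics from the data":
# fit a quadratic map x[t+1] = f(x[t]) to two coupled nonlinear
# time series by least squares.
import numpy as np

# Synthetic "measured" data: two coupled chaotic logistic maps
x = np.zeros((500, 2))
x[0] = [0.4, 0.3]
for t in range(499):
    a, b = x[t]
    x[t + 1, 0] = 3.7 * a * (1 - a) + 0.05 * b
    x[t + 1, 1] = 3.6 * b * (1 - b) + 0.05 * a

def features(state):
    a, b = state
    return [1.0, a, b, a * a, b * b, a * b]   # quadratic basis

# Least-squares fit of next state from features of current state
phi = np.array([features(s) for s in x[:-1]])
coeffs, *_ = np.linalg.lstsq(phi, x[1:], rcond=None)

residual = phi @ coeffs - x[1:]
print("one-step RMS error:", np.sqrt(np.mean(residual ** 2)))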

Of course, you may have a much lower estimate than I do of the *dynamical*
complexity involved in getting mindstuff out of brainstuff.

> Large corporations routinely build systems with hundreds of times as many
> lines of code as Novamente.

Yes, of course, Windows 2000 has millions of lines of code.

This is why I did not compare Novamente to other systems in terms of lines
of code, but in terms of

a) number of different interacting algorithms

b) complexity of interaction of different algorithms

I said that

a) I think Novamente has about as many different interacting algorithms as a
modern C++ compiler

b) I think the complexity of interaction of the algorithms in Novamente is
far greater than in existing software systems, including NT with its 50
million lines of code

It is the complexity of interaction between different component algorithms
which makes systems like this hard to test, debug and parameter-tune, not
lines of code, and not even raw "number of different interacting
algorithms."

Sorry that I was not clearer in stating my views on this.

> Also, I happen to feel that incorrect AI designs contribute nontrivially
> to the amount of work that gets dumped on parameter-tuning, engineering,
> and performance analysis.

Why do you think a correct AI design would not require substantial
parameter-tuning, performance analysis and engineering?

The human brain clearly has required very substantial "parameter-tuning"
over an evolutionary time-scale, and it has also been "performance tuned" by
evolution in many ways.

In all other software systems that I know of, "complexity of interaction of
different algorithms" is a good indicator of the amount of parameter-tuning,
subtle engineering design, and complicated performance analysis that has to
be done.

So, why do you think a "correct" AGI design would avoid these issues that
can be seen in the brain and in all other complex software systems?

From what I know of DGI, I think it's going to require an incredible amount
of subtle performance analysis and engineering and parameter tuning to get
it to work at all. Even after you produce a detailed design, I mean. If
you can produce a detailed design based on your DGI philosophy, that will be
great. If you can produce such a design that

-- can be made efficient without a huge amount of engineering effort and
performance analysis, and

-- has a small number of free parameters, the values of other parameters
being given by theory

THEN, my friend, you will have performed what I would term "One Fucking Hell
of a Miracle." I don't believe it's possible, although I do consider it
possible you can make a decent AI design based on your DGI theory.

In fact, I think that the engineering and performance analysis problems are
likely to be significantly GREATER for a DGI-based AI design than for
Novamente, because DGI makes fewer compromises to match itself up to the
ways and means of contemporary computer hardware & software frameworks.

> Imagine Lenat saying, "Well, suppose that you need to enter a trillion
> facts into the system... in this case it would make sense to scan an
> existing human brain because no programming team could handle the
> engineering challenge of managing relationships among a dataset that
> large."

But this is the worst example you could have possibly come up with! Cyc is
very easy to engineer precisely because it makes so many simplifying
assumptions.

In almost all cases, I believe, incorrect AI theories have led to overly
SIMPLE implementation designs, not overly complex ones. AI scientists have
VERY often, it seems to me, simplified their theories so that they could be
implemented without excessive implementation effort and parameter tuning.

> Of course, it's hard for me to see in advance what will turn out to be the
> real, unexpected critical challenges of building DGI. But I suspect that
> when the pieces of a correct AI design are hooked together, 90% of the
> humanly achievable functionality will take 10% of the humanly possible
> tuning. In other words, I think that the tremendous efforts you put into
> tuning Webmind are symptomatic of an AI pathology.

In almost all cases, we do parameter tuning *automatically*, by using
optimization methods to tune parameters. We then have to tune the
parameters of the optimization methods themselves by hand, but this is
rarely difficult.

The problem is that when you have a VERY LARGE number of INTERRELATED
parameters, the problem of automated parameter adaptation via optimization
algorithms becomes too formidable. This is what happens when a system is
too complex in terms of the subtlety of interaction between different
algorithms.
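
As a toy picture of what the automated part looks like (nothing like the
actual Webmind or Novamente tuning code): a hill-climber, whose own step
size and budget are hand-set meta-parameters, searches a landscape in
which each parameter's contribution depends on its neighbors. It is such
cross terms, multiplied across hundreds of parameters, that make the
search formidable.

# Toy illustration of automated parameter tuning. The hill-climber's
# own meta-parameters (step size, budget) are hand-set; the coupling
# terms in the objective are what make high-dimensional tuning hard.
import numpy as np

rng = np.random.default_rng(1)

def system_performance(params):
    # Stand-in for "run the system, measure it". The cross terms
    # mean no parameter can be tuned in isolation.
    quality = -np.sum((params - 0.5) ** 2)
    coupling = -np.sum(np.abs(np.diff(params)) * params[:-1])
    return quality + coupling

def tune(n_params=25, step=0.05, budget=2000):   # hand-set meta-parameters
    best = rng.random(n_params)
    best_score = system_performance(best)
    for _ in range(budget):
        cand = np.clip(best + rng.normal(0, step, n_params), 0.0, 1.0)
        score = system_performance(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

params, score = tune()
print("tuned", len(params), "parameters, score %.3f" % score)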

Webmind did have this problem, and I think Novamente will not, because it's
a simpler system in many ways.

I'm afraid you are fooling yourself when you say that parameter tuning will
not be a big issue for your AI system.

Even relatively simple AI models like attractor neural nets require a lot of
parameter tuning. Dave Goldberg has spent years working on parameter tuning
for the GA. Of course, you can claim that this is because these are all bad
techniques and you have a good one up your sleeve. But I find it hard to
believe you're going to come up with the first-ever complex computational
system for which parameter-tuning is not a significant problem.
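
Even a minimal GA, like the toy sketch below (not Goldberg's work, and
not anything from Webmind), exposes the regress: population size,
mutation rate, elite count and generation budget are all meta-parameters
that someone has to set by hand, or tune with yet another optimizer.

# Minimal GA sketch. Note how many knobs the *tuner itself* has:
# pop_size, mut_rate, n_gen, n_elite -- the meta-parameter regress.
import numpy as np

rng = np.random.default_rng(2)

def fitness(p):
    return -np.sum((p - 0.3) ** 2)   # stand-in objective

def ga(n_params=10, pop_size=40, mut_rate=0.1, n_gen=100, n_elite=4):
    pop = rng.random((pop_size, n_params))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]      # best individuals
        parents = elite[rng.integers(0, n_elite, (pop_size, 2))]
        mask = rng.random((pop_size, n_params)) < 0.5   # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        mutate = rng.random(pop.shape) < mut_rate       # random-reset mutation
        pop = np.where(mutate, rng.random(pop.shape), pop)
        pop[:n_elite] = elite                           # elitism
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

best = ga()
print("best parameter vector:", np.round(best, 2))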

Yes, a sufficiently advanced system can tune its own parameters, and
Novamente does this in many cases; but intelligent adaptive self-tuning for
a very complex system presents an obvious bootstrapping problem, which is
trickier the more complex the system is.

> That is not the kind of specialized complexity that goes into creating a
> DGI-model AI. Computational systems give rise to cognitive talents;
> cognitive talents combine with experiential content to give rise to domain
> competencies.

Sure, this is going to be the case for *any* feasible AGI system.

> DGI does not contain *more specialized versions* of these subsystems
> that support specific cognitive talents, which is what you seem to be
> visualizing, but rather contains a *completely different* set of
> underlying subsystems whose cardinality happens to be larger than the
> cardinality of the set of Novamente subsystems.

Can you give us a hint of what these underlying subsystems are?

Are they the structures described in the DGI philosophy paper that you
posted to this list, or something quite different?

> I believe this problem is an AI pathology of the Novamente architecture.
> (This is not a recent thought; I've had this impression ever since I
> visited Webmind Inc. and saw some poor guy trying to optimize 1500
> parameters with a GA.)

Webmind had about 300 parameters; if someone told you 1500, they were
goofing around.

However, only about 25 of them were ever actively tuned; the others were set
at fixed values.

Adding in the unimplemented parts of Webmind might have doubled these
numbers, because we left some of the most complex stuff for last.

I sure am eager to see how DGI or *any* AGI system is going to avoid this
sort of problem.

A2I2 is pretty simple now -- there are many parameters, but I'm sure most of
them can be kept at fixed values. However, they may face this sort of issue
when they try to go beyond the digital cockroach level and build a more
diversified artificial neural net structure.

> > we'd be better off to focus on brain scanning and
> > cellular brain simulation.
>
> That doesn't help.

Your extreme confidence in this regard, as in other matters, seems
relatively unfounded.

Many people with expertise in brain scanning and biological systems
simulation disagree with you.

> Novamente has what I would consider a flat architecture, like "Coding a
> Transhuman AI" circa 1998. Flat architectures come with certain explosive
> combinatorial problems that can only be solved with deep architectures.
> Deep architectures are admittedly much harder to think about and invent.

"Deep architecture" is a cosmic-sounding term; would you care to venture a
definition? I don't really know what you mean, except that you're implying
that your ideas are deep and mine are shallow.

My own subjective view, not surprisingly, is that YOUR approach is
"shallower" than mine, in that it does not seem to embrace the depth of
dynamical complexity and emergence that exists in the mind. You want to
ground concepts too thoroughly in images and percepts rather than accepting
the self-organizing, self-generating dynamics of the pool of intercreating
concepts that is the crux of the mind. I think that Novamente accepts this
essential depth of the mind whereas DGI does not, because in DGI the concept
layer is a kind of thin shell sitting on top of perception and action,
relying on imagery for most of its substance.

The depth of the Novamente design lies in the dynamics that I believe (based
on intuition not proof!) will emerge from the system, not in the code
itself. Just as I believe the depth of the human brain lies in the dynamics
that emerge from neural interactions, not in the neurons and
neurotransmitters and glia and so forth. Not even the exalted
microtubules!!

> It requires that you listen to your quiet, nagging doubts about shallow
> architectures and that you go on relentlessly replacing every single
> shallow architecture your programmer's mind invents, until you finally
> start to see how deep architectures work.

Ah, how my colleagues would laugh to see you describe me as having a
"programmer's mind" !!!

For sure, I am at bottom a philosopher, much as you are, I suspect. You may
disagree with my philosophy but the fact remains that I spent about 8 years
working on mathematically and scientifically inspired philosophy (while also
doing various scientific projects), before venturing to design an AGI.
Novamente is not at all a programming-driven AI project, although at this
stage we are certainly using all the algorithmic and programming tricks we
can find, in the service of the design. The design was inspired by a
philosophy of mind, and is an attempt to realize this philosophy of mind in
a practical way using contemporary hardware and software. The design may
fail to realize the philosophy, which will not invalidate the philosophy.
My philosophy of mind (see Chaotic Logic and From Complexity to Creativity)
does not tell you whether Novamente will lead to the emergences I think it
will; this would require a mathematics of mind, not just a philosophy with
some mathematical inspiration.

> I'm sorry, Ben, but I don't think that Novamente lies right at the
> fringes of the most complex systems that are humanly comprehensible.
> Different people will have different ideas of what constitutes "depth
> beyond the human ability to comprehend". I don't see how you can know
> what's too deep for humans to comprehend, anyway; all information
> available is of the form "X is too deep for me to comprehend at my
> current level of skill".

You seem to have misinterpreted me. I am not talking about anything being
in principle beyond human capability to comprehend forever. Some things ARE
(this is guaranteed by the finite brain size of the human species), but
that's not the point I'm making.

What I am talking about is the set of things that are humanly comprehensible
*before a detailed brain simulation can be implemented*. I believe a
detailed human brain simulation will be achievable within 30 years or so,
and therefore, in my view, only software systems that will be humanly
comprehensible and constructable *before this* can be considered competitive
with the detailed brain simulation approach.

I still believe it's possible that the AGI design problem is SO hard that
detailed brain simulation is easier. I hope this isn't true, but if pressed
I'd give it a 10%-20% chance of being true. Generally, I am not prone to
the near 100% confident judgments that you are, Eliezer. I think I tend to
be more aware of the limitations of my own knowledge and cognitive ability
than you are of your own corresponding limitations.

Of course, if this is true, then the Novamente work is still pretty
worthwhile. I have a nice datafile of MEG brain scan data on my hard drive
waiting for Novamente analysis ;>

> I think you'd be better off if you stopped thinking of some level of
> complexity as "too difficult" and started thinking of that level of
> complexity as "my responsibility, my challenge; the work of Evolution, my
> rival and target." I find that quite a number of things supposedly
> "beyond human ability" are so-called because people use the phrase
> "beyond human ability" when they mentally flinch away from the prospect
> of having to do something.

Eliezer, I think it is rather funny for *you* to accuse *me* of flinching
away from the prospect of trying to do something!

In the late '90s, after many years of thought, I decided to try to implement
the best AGI design I could think of, because I decided that, while far from
ideal, it had a pretty decent chance of working. You disagree and think
that Novamente has a very small chance of working. Fine. But I did not
choose to implement Novamente because of any fear of implementing a more
complicated system. Rather, it was a pragmatic decision. I decided that
anything significantly MORE complicated would have a LESSER chance of
succeeding for practical reasons. On the other hand, anything much simpler
(like A2I2), in my view, isn't going to be efficient on current hardware and
software.

It seems to me that perhaps it is YOU who are flinching away from the
prospect of having to do something!! Where is the design for the DGI-based
AI system? Where is the prototype code?

I may have different ideas than you, but I am certainly not flinching away
from taking the actions I believe are correct.

I am not taking the actions *you* think are correct, not out of fear, but
out of disagreement!! Furthermore, I am not a dogmatic person and it is
quite possible for me to be convinced I'm wrong. So far your criticisms of
Novamente have convinced me that I've done a mediocre job of *explaining
why* I think it will be a real AGI, but have not caused me to doubt my own
intuitions. But maybe I'll see your detailed DGI design and decide it has
an even better chance than Novamente of succeeding -- who knows. Based on
the DGI philosophy paper, I doubt this will happen, because I think the
concept level needs to have a lot more freedom and dynamical complexity; but
maybe this freedom and dynamical complexity will be implicit in your design
in some way even though you didn't emphasize it in your philosophical
write-up. I look forward to seeing the details one of these years and
finding out ;->

yours,
Ben G


