RE: Seed AI milestones (was: Microsoft aflare)

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Wed Feb 27 2002 - 09:55:03 MST


[I'm going to reply to your other message properly, i.e. at home. This is
a quick reply from work]

On Wed, 27 Feb 2002, Ben Goertzel wrote:

> I have never understood your "complexity barrier" argument as anything
> but an intuition.

No, I'm merely expressing a well-known truth about software engineering.
And not only about software engineering, but about engineering in
general. Minds are one thing: complex. Look at the hardware layer: it
very obviously is. And we're getting increasing evidence that these
structures are indeed doing a lot of work, and are not very reducible.
This is not an intuition; I can cite you a few papers indicating that
there's not much averaging going on. Each incoming bit of information
from the trenches makes me lose what residual optimism I had.

There's a barrier to the complexity of a system you can build as a single
person. Different people have different ceilings; mine is quite low.
Teams do not really scale in that regard: the ceiling of a group is not
dramatically higher than that of a single individual, and the ceiling of
a large group can actually be lower. This is basic software engineering
knowledge.

General intelligence is not a property of a simple system, far from it.
As a result I predict that human software engineers coding an AI
explicitly (i.e. not using stochastic/noisy/evolutionary methods) are
going to fall short of the goal.

> I agree that goal-directed self-modification is a specialized mental
> function, similar (very roughly speaking) to, say, vision processing, or
> mathematical reasoning, or social interaction. However, also like these
> other things, it will be achieved by a combination of general
> intelligence processes with more specialized heuristics.

Am I correct to assume that we're talking about explicit codification of
knowledge distilled from human experts? Is there any reason to suspect
that we're going to do any better than Lenat & Co.? The track record so
far is not overwhelming.

> I think you are wrong about losing "orders of magnitude" of performance.
> If you have any detailed calculations to back up this estimate, please
> share them.

I don't have to cite any detailed calculations; citing benchmark results
would seem to be enough. Current architectures are not all-purpose. As a
result, careful tweaks and measurements must be made on a given chunk of
code before it exploits the hardware optimally (and even then it remains
far from a hand-coded solution by a competent human). We're talking
compiled C here. I'm not even mentioning holistic aspects, such as the
impact of messaging latency or the choice of the right device
driver/kernel version, which can most assuredly kill performance. We're
talking about dumping some high-level language into C (not C++, because
improper use of C++ is a sure performance killer) by a system completely
agnostic about its deeper layers.

At *least* two orders of magnitude. Probably more.
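
To make that concrete, here is a toy C sketch (my own illustration, not
code from either system): the same arithmetic written in two traversal
orders. On typical cached hardware the column-major walk over a row-major
array runs several times slower, and that is exactly the kind of
hardware-sensitive detail a code generator agnostic about the deeper
layers never sees.

  #include <stdio.h>
  #include <stddef.h>

  #define N 1024
  static double a[N][N];   /* row-major, as C lays it out */

  /* Walks memory contiguously: friendly to cache lines and prefetch. */
  double sum_row_major(void)
  {
      double s = 0.0;
      for (size_t i = 0; i < N; i++)
          for (size_t j = 0; j < N; j++)
              s += a[i][j];
      return s;
  }

  /* Same result, same operation count, but strides N doubles per
     access; typically several times slower because of cache misses. */
  double sum_col_major(void)
  {
      double s = 0.0;
      for (size_t j = 0; j < N; j++)
          for (size_t i = 0; i < N; i++)
              s += a[i][j];
      return s;
  }

  int main(void)
  {
      printf("%f %f\n", sum_row_major(), sum_col_major());
      return 0;
  }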

> My own experience, based on prototyping in this space for a while, is that
> you will lose about order of magnitude of performance by doing self-
> modification in a *properly optimized* high-level language rather than in a
> low-level language like C++.

Um, C++ can easily lose an order of magnitude of performance over C if
you don't know what you're doing. Not to mention tweaking the compiler
flags and jiggling the code, trying not to run into performance killers
(longword alignment, for instance).
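
The alignment point, again as a toy sketch of mine (nothing from
Novamente): the same three fields declared in two orders. The compiler
pads the first layout to keep the double longword-aligned, so on a
typical machine sizeof() comes out 24 bytes versus 16; across a large
array that padding is pure cache waste.

  #include <stdio.h>

  /* char + 7 pad + double + char + 7 pad = 24 bytes, typically */
  struct careless { char tag; double x; char flag; };

  /* double + char + char + 6 pad = 16 bytes, typically */
  struct careful  { double x; char tag; char flag; };

  int main(void)
  {
      printf("careless: %zu bytes, careful: %zu bytes\n",
             sizeof(struct careless), sizeof(struct careful));
      return 0;
  }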

We seem to mean very different systems when we speak of high performance.
The absence of high-performance computing types in AI is notable.

> Our first self-modification experiments in Novamente (our new AI system,
> the Webmind successor) will not involve Novamente rewriting its C++
> source, but rather Novamente rewriting what we call "schema" that
> control its cognitive functioning (which are equivalent to programs in
> our own high-level language, that we call Sasha (named after our
> departed collaborator Sasha Chislenko)).

You will let us know how well self-modification does, won't you? This is
a genuinely interesting experiment.

> In the first-draft Novamente schema module, executing a schema will be
> about 2 orders of mag. slower than executing an analogous C++ program,
> but this is because the first-draft schema module will not embody
> sophisticated schema optimization procedures. We have a fairly detailed
> design for a second-version schema module that we believe will narrow
> the performance gap to within 1 order of magnitude.
>
> Why accept a 1 order of magnitude slowdown? Because we are
> *confronting* the complexity barrier you mention rather than hiding in
> fear from it. Novamente is very complex, both in its design and in its

Bootstrap requires *more* resources, not fewer. Nonchalance about losing
touch with the bare metal in the bootstrap design phase sounds very wrong
to me.

> emergent behaviors, but we are working to keep it manageably complex.
> In our judgment, having the system modify schema (Sasha programs) rather
> than C++ source is a big help in keeping the complexity manageable.
> And this added manageability is more than worth an order of magnitude
> slowdown.
>
> Eli and I are fashioning solutions (or trying!!) whereas you are
> pointing out potential problems. There is nothing wrong with pointing

Actually, I'm pretty happy that there are so many problems. I would
really hate to see a SysOp made reality, because it would mean a man-made
Blight.

> out problems; however, it is a fact that both the nature of the problems
> and the potential workarounds that exist become far clearer once one
> starts working at solving the problems, rather than just talking about
> them.

Um, implementing a runaway AI is certainly not my problem. I'm interested
in modelling biological organisms, which does not involve such dangerous
components. This is the wrong forum to discuss it, however.


