RE: Seed AI (was: How hard a Singularity?)

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 23 2002 - 08:37:58 MDT


Eliezer wrote:
> Perhaps. Nonetheless there is more Cycish stuff in Novamente than I am
> comfortable with. Novamente does contain structures along the lines of
> eat(cat, mice). I realize you insist that these are not the only
> structures and that the eventual real version of Novamente will replace
> these small structures with big emergent structures (that nonetheless
> follow roughly the same rules as the small structures and can be
> translated into them for faster processing).

The current Novamente version deals only with numerical inputs, and contains
no structures like

        eat(cat, mice)

The first version that deals with human language input may or may not
create structures like this internally on its own...
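
For concreteness, here is a toy illustration in Python (purely hypothetical,
not actual Novamente code or data) of the contrast being drawn: an explicit,
hand-coded relational assertion of the eat(cat, mice) kind, versus the raw
numerical input the current version actually consumes.

        # Hypothetical sketch only -- not Novamente internals.
        # A Cyc-style explicit, hand-coded relational assertion:
        explicit_relation = ("eat", "cat", "mice")

        # Versus the kind of raw numerical input the current system handles:
        numerical_input = [0.31, 0.77, 0.02, 0.54]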

Such examples are used in the current documentation on the system, because
they're easy to write about. In the revised documentation they are relied
upon less because they proved misleading to you and some other readers.

> I guess what I'm trying to say is we have different ideas about how much
> of mind is implemented by content, the sort of stuff we humans would
> regard as *transferable* knowledge - the kind of knowledge that we
> communicate through books. I think you ascribe more mind to transferable
> content than I. I am not saying that you ascribe all mind to transferable
> content, but definitely more than I do (and less than Cyc).

I'd like to divide the "transferable content" category into two:

1) content that is explicitly transmitted through books, DBs, etc.

2) content that is *implicitly* transmitted through interacting with other
minds in a shared environment

2 is at least as important as 1.

Cyc tries to capture, in explicit form, a lot of what humans learn
implicitly, but I am very skeptical of this approach.

> A tremendous part of an AI is brainware. The most important content -
> maybe not the most content, but the most important content - is content
> that describes things that only AIs and AI programmers have names for,
> and much of it will be realtime skills that humans can help learn but
> which have no analogue in humans.

In my view, the most important content of an AGI mind will be things that
neither the AI nor its programmers can name, at first. Namely: *abstract
thought-patterns, ways of organizing ideas and ways of approaching
problems*, which we humans use but know only implicitly, and which we will
be able to transmit to AI minds implicitly through interaction in
appropriate shared environments.

> You think my design is too complex.

Actually I don't have much idea of what your AI design is, if indeed you
have one!

> Okay. Nonetheless, the more complex a design is, the more mind arises
> from the stuff that implements that design, and the more opportunities
> there are to improve mind by improving the implementation (never mind
> actually *improving the design*!) I think that the more specific a model
> one has of mind, the more ways you'll be able to think of improving it.

Of course, this general statement is not true. Often, in software
engineering and other kinds of engineering, a very complex design is HARDER
to improve than a simple one.

> it is knowledge about how to think, and an AI will think differently
> from humans.

It will think differently from humans, but in the early stages, it will
learn a lot of what it knows about "how to think" from humans.

> Humans in general (as opposed to successful AI researchers) have very
> little knowledge of this kind, and what there is will be mostly
> untransferable because of the difference in cognitive systems.

I think that a lot of transfer of thought-patterns will happen *implicitly*
through interaction in shared environments.

For this to happen, the human teachers do not need explicit declarative
knowledge of the thought-patterns involved.

> I think explicit education by humans will be an important part of
> bootstrapping an AI to the level of being able to solve its own problems.
> By the time human knowledge is even comprehensible to the AI, most of the
> hard problems will have already been solved and the AI will probably be
> in the middle of a hard takeoff.

I doubt this is how things will go. I think human knowledge will be
comprehensible to an AI *well before* the AI is capable of drastically
modifying its own source code in the interest of vastly increased
intelligence.

> > And I agree with that -- the question is *how fast* will the AI be
> > able to improve itself.
> >
> > It's a quantitative question. Your intuitive estimate is much faster
> > than mine...
>
> Ben, you're the one who insists that everything is "intuition". I am
> happy to describe your estimates as "intuitions" if you wish, but I
> think that more detailed thoughts are both possible and desirable.

I don't insist that everything is intuition. I try to carefully distinguish
between conclusions based on evidence and hypotheses based on intuition,
both in my own thinking and in the thinking of others.

(Of course, intuitions are at bottom based on evidence, but they integrate
large bodies of evidence in complex, hard-to-trace ways.)

> You seem to think that you create a general intelligence with all basic
> dynamics in place, thereby creating a baby, which then educates itself
> up to human-adult-level intelligence, which can be done by studying
> signals of the kind which human adults use to communicate with each
> other. I don't see this as likely. The process of going from baby to
> adult is likely to be around half brainware improvement and half the
> accumulation of knowledge that cannot be downloaded off the Internet.
> The most the corpus of human knowledge can do is provide various little
> blackbox puzzles to be solved, and most of those puzzles won't be the
> kind the AI needs to grow.

Yes, we differ here. I think we will create an AGI with all basic dynamics
in place, thus creating a baby, which will educate itself up to
human-adult-level intelligence, partly by interaction with adult humans in
shared environments.

I also think that, during this education process, we will discover flaws in
the AGI's "basic dynamics", so that engineering will be ongoing during the
teaching period. As teaching progresses, the AGI itself will be more and
more useful in helping improve its own dynamics (and structures).

> Okay, now *you're* misinterpreting *me*. I don't think that AGI can be
> bootstrapped to through seed AI, nor that human interaction is
> unimportant. Humans are a seed AI's foundations of order. Humans will
> teach the AI but what they will teach is not the corpus of human
> declarative knowledge. What they teach will be domain problems that are
> at the right level the AI needs to grow, and what the AI will learn will
> be how to think.

I think that humans will teach the AGI more than just "domain problems at
the right level." By cooperatively solving problems together with the AGI,
humans will teach it a network of interrelated thought-patterns, just as we
learn from other humans by interacting with them.

> > You may say that with good enough learning methods, no teaching is
> > necessary.
>
> Incorrect. What I am saying is that what is taught will not be the
> corpus of human declarative knowledge, nor would trying to teach that
> corpus prove very useful.

The corpus of human declarative knowledge is useful to an AGI for two
reasons:

a) directly, it's valuable knowledge

b) it gives a context in which far MORE valuable abstract thought-patterns
can be transmitted from humans to the AGI

> > Maybe so. I know you think Novamente's learning methods are too
> > weak, though you have not explained why to me in detail, nor have you
> > proposed any concrete alternatives. However, I think that *culture and
> > social interaction* help us humans to grow from babies into mature adult
> > minds in spite of the weaknesses of our learning methods,
>
> Because humans have evolved to rely on culture and social interaction
> does not mean that an AI must do so.

I agree, it does not mean that an AI *must* do so. However, I hypothesize
that allowing an AI to learn its initial thought-patterns from humans
through experiential interaction is

a) the fastest way to get to an AGI

b) the best way to get an AGI that has a basic empathy for humans

> From an AI's-eye-view, the "humans" are external blackbox objects that
> pose problems which, when the AI solves them, turns out to lead to the
> acquisition of reusable reflective skills. (At least, that's what
> happens if the humans are doing it right.)

I think this is

a) a less efficient way to train a mind than cooperating with it in a shared
environment [because the latter allows more abstract-thought-pattern
transfer]

b) a route much less likely to lead to a Friendly AI. An AGI that has
learned to approach contexts and problems cooperatively with humans, and has
hence absorbed human thought-patterns, is a lot more likely to have real
empathy for humans as it reaches the transcension phase.

> > and I think that these same things can probably help a baby AGI to
> > grow from a piece of software into a mature AGI capable of directing
> > its activities in a useful way and solving hard problems.
>
> I don't think the software of a baby AGI will much resemble the software
> of a mature AGI, and I say "AGI", not "seed AI".

Yes, you see more "code self-modification" occurring at the
"pre-human-level-AI" phase than I do.

This is because I see "intelligent, goal-directed code self-modification" as
a very hard problem, harder than mastering human language, for example.

> > And if we do get it started with our teaching & our knowledge, then
> > when it outstrips us, it will face a new set of challenges. I'm sure
> > it will be able to meet these challenges, but how fast? I don't know,
> > and neither do you!
>
> And this "I don't know" is used as an argument for it happening at
> humanscale speeds, or in a volume of uncertainty centered on humanscale
> speeds?

I don't have a strong argument that the transition from human-level to
vastly superhuman level intelligence will take years rather than weeks.

I consider "humanscale speed" a likely upper bound (though not a definite
one).

I don't feel you have a strong argument that the transition will be vastly
faster than this upper bound suggests, though.

Your argument was that "there's nothing special about human-level
intelligence." I sought to refute that argument by pointing out that, to
the extent an AGI is taught by humans, there is something special about
human-level intelligence after all. Then you countered that, in your
envisioned approach to AI, teaching by humans plays a smaller role than in
my own envisioned approach. And indeed, this suggests that if seed AI were
achieved first by your approach rather than mine, the gap between
human-level and vastly superhuman intelligence would be smaller.

-- Ben G


