Milestones for GISAI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Aug 05 2001 - 20:36:34 MDT


The milestones for seed AI have nothing to do with Flare. Flare is a more
powerful programming tool. The AI doesn't take its shape from Flare.
Flare is necessary but it is not the be-all and end-all of AI.

Background:
    http://intelligence.org/seedAI/
    http://intelligence.org/GISAI/

The first milestones for seed AI are:

1. The parts work in isolation.
2. The parts all work together to create a single choreographed thought.

When I say "choreographed", I mean that the thought is not spontaneous or
emergent in any way; everyone got together and figured out exactly how it
was going to work, and set up all the machinery in advance so that it
would grind through and create that single thought. The machinery is
still real and is still general machinery - there are no "rigged demo"
points where someone went in and actually wrote special-case code.
However, "choreographed" means that you can pick out in advance what you
want the thought to be, which preprogrammed symbols are put together in
which preprogrammed thought structure, and then proceed to debug the
system as the thought fires - debug the modalities that the symbols fire
into, debug problems in the creation of mental imagery, and so on. In
other words, "choreography" implies that you construct a general mechanism
and then sweat for a while to get it to work in a particular instance.

This is where it gets complicated. The next milestone tracks are:

3. Symbol formation.
4. Symbol structures (thoughts).
5. Thought triggering and deliberation.

The symbol milestones are:

3a. Several preprogrammed symbols exist, created by hand-coding the symbol
content. An example might be creating the symbol "three billiard balls"
by using the retrieval of a memory prototype and a unique correspondence
operation along the lines described in GISAI under "Concepts". This
symbol would be capable of perception, prototype retrieval, and
application.
3b. Once you have a symbol for "three billiard balls", you'd want to work
up to describing "three sets of four billiard balls", cross-modality
symbols, and other ways of perceiving cardinality (for example, counting).
3c. Once you have "three" and "four", try handcoding a symbol for
"number".
3d1. Delete the symbol for "three" and try to get a choreographed symbol
formation for "three billiard balls".
3d2. Get a non-choreographed symbol formation for "three billiard balls"
by deleting the symbol and placing the AI in a model world where the
formation of that symbol is obvious and useful.
3d3. Get a choreographed symbol formation for "three billiard balls"
followed by a choreographed symbol formation for "three".
3d4. Get a model-world true symbol formation for "three billiard balls"
followed by "three".
3e. Get a true symbol formation of "number".
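To make milestone 3a concrete, here is a purely illustrative toy sketch -
every class and method name below is invented for this example, and the
actual "Concepts" machinery described in GISAI is far richer - of a
hand-coded symbol that supports perception, prototype retrieval, and
application:

```python
# Toy sketch of milestone 3a (illustrative only; all names invented).
# A hand-coded symbol with the three capabilities named in 3a:
# perception, prototype retrieval, and application.

from dataclasses import dataclass


@dataclass
class Symbol:
    name: str
    prototype: dict  # hand-coded symbol content (the memory prototype)

    def perceive(self, imagery: dict) -> bool:
        """Perception: does current mental imagery match this symbol?"""
        return all(imagery.get(k) == v for k, v in self.prototype.items())

    def retrieve_prototype(self) -> dict:
        """Prototype retrieval: reinstantiate the stored imagery."""
        return dict(self.prototype)

    def apply(self, imagery: dict) -> dict:
        """Application: impose the symbol's content on target imagery."""
        merged = dict(imagery)
        merged.update(self.prototype)
        return merged


# Hand-coded symbol for "three billiard balls".
three_balls = Symbol("three billiard balls",
                     {"object": "billiard ball", "cardinality": 3})

matched = three_balls.perceive({"object": "billiard ball",
                                "cardinality": 3})
```

The point of the sketch is only the division of labor: the symbol's
content is written by hand (3a), so the later milestones (3d1 onward)
amount to deleting that hand-coded content and getting the system to
regenerate it.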

Assuming 3a, the thought milestones are:

4a. The repertoire is expanded from a single choreographed thought to a
set of several choreographed structures which include some free
variables - that is, thoughts of the form "a b X d Y f", where X and Y
can take values selected from a store of preprogrammed symbols.
4b. The set of choreographed structures containing variables is large
enough that, by filling in the variables, it is possible to represent
complex sequences of thoughts that sort of resemble plans or descriptions,
albeit plans or descriptions of the "See Spot run" type.
4c. Choreographed larger structures are no longer necessary and local
rules are sufficient; thoughts have the same freedom as, say, grammatical
sentences. Thought is now up and running as new symbols are formed.
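As a toy sketch of milestone 4a - illustrative only, with all names
invented for the example - a choreographed thought structure with free
variables might amount to little more than template-filling from a fixed
symbol store:

```python
# Toy sketch of milestone 4a (illustrative only; all names invented).
# A choreographed thought template "a b X d Y f" whose free variables
# X and Y are filled from a store of preprogrammed symbols.

SYMBOL_STORE = {"Spot", "run", "see", "ball", "fetch"}


def fill_thought(template, bindings):
    """Replace free variables in a thought template with symbols drawn
    from the preprogrammed store; fixed elements pass through as-is."""
    out = []
    for slot in template:
        if slot in bindings:
            value = bindings[slot]
            if value not in SYMBOL_STORE:
                raise ValueError(f"{value!r} is not a preprogrammed symbol")
            out.append(value)
        else:
            out.append(slot)  # fixed, choreographed element
    return out


# A "See Spot run"-grade thought: fixed frame, two free slots.
template = ["a", "b", "X", "d", "Y", "f"]
thought = fill_thought(template, {"X": "Spot", "Y": "run"})
```

The restriction to a fixed template set is exactly what 4c removes:
there, local composition rules replace the preassembled frames.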

Assuming 4a, the deliberation milestones are:

5a. A single choreographed "thought trigger" occurs, in which
choreographed mental imagery undergoes a choreographed binding to a
triggered thought. This means that the cognitive system selects a
thought structure by choreography, fills in the variables with
appropriate symbols by choreography, and fires the thought, thus
altering the current mental imagery.
5b. A choreographed thought sequence occurs.
5c. Real sequences of thoughts occur, although the thoughts may be
composed of precreated structures with a few free variables. Deliberation
is now up and running as the symbol milestones move through 3b, 3c, et
cetera.
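A toy sketch of milestone 5a - again illustrative only, with every name
invented for the example - shows the loop that 5a choreographs: current
mental imagery triggers selection of a thought structure, the variables
are bound, and firing the thought alters the imagery:

```python
# Toy sketch of milestone 5a (illustrative only; all names invented).
# A choreographed "thought trigger": imagery matches a trigger
# condition, a thought structure is selected, its variable is bound,
# and firing the thought alters the current mental imagery.


def trigger_thought(imagery, triggers):
    """Fire the first thought whose trigger condition matches the
    imagery: bind its free slot, apply its effect, return the thought."""
    for condition, template, binder, effect in triggers:
        if condition(imagery):
            bound = [binder(imagery) if slot == "X" else slot
                     for slot in template]
            effect(imagery, bound)  # firing alters the mental imagery
            return bound
    return None


# Choreographed setup: if a ball is seen, think "grasp <ball>" and
# record that thought as the current goal in the mental imagery.
triggers = [(
    lambda im: im.get("seen") == "ball",   # trigger condition
    ["grasp", "X"],                        # thought structure
    lambda im: im["seen"],                 # variable binding
    lambda im, thought: im.update(goal=thought),  # effect on imagery
)]

imagery = {"seen": "ball"}
fired = trigger_thought(imagery, triggers)
```

Here the whole trigger-bind-fire cycle was set up in advance by the
programmer - that is what makes it choreographed rather than a real
sequence of thoughts in the 5c sense.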

Note that further milestones for each of the tracks can be carried out
using only the first milestones of each of the others. The actual
milestones are more likely to run in reverse of the order given here:
5c before 4c, 4c before 3c, and 3e being extremely difficult.

This all gets you to the first phase of general intelligence.

I've left out memory formation, memory retrieval, the sensory modalities,
distinguishing between factual and subjunctive and goal imagery, and,
well, whole bunches of stuff. I've left out quite a few things that need
to happen before milestones 1 and 2.

My guess is that you can teach an AI to play tic-tac-toe at milestone
levels 5c+4a+3b (optimistic) or 5c+4b+3d2.

All this assumes, grinding away in the background, nondeliberative
heuristics-improving-heuristics a la Eurisko. At stage 5c+4b+3b, you can
start using thoughts in a lot of places where you used to use Flare
codelets. At stage 5c+4b+3c, the thoughts start to be a little
intelligent and a little adaptive. At stage 5c+4c+3c, you can get some
serious intelligence and occasional creativity. At 5c+4c+3e, you move on
to the next phase of the great endeavor, which is fleshing in the general
intelligence and getting the AI to drop from Flare into machine code. (By
the time the AI drops into machine code *or* displays significant
creativity in self-modelling, the AI *must* be *structurally* complete
with respect to Friendliness. Ideally this occurs much sooner, before
general intelligence is structurally complete.)

The next great endeavor after the drop into machine code is essentially
the Singularity. Once the AI has dropped into self-created machine code,
the remaining work begins to look more like teaching and less like
programming. The undirected work at this stage is teaching whatever you
know or whatever the AI asks you for; the directed work consists of
bringing the AI's skills at understanding design, cognition, and, for
lack of a better word, "philosophy", up to human levels.

So there are, as I currently see it, essentially three stages:

I) The reach for general intelligence. Unintelligent
self-modification.
II) The reach for self-modification that makes use of general
intelligence; self-understanding of design; the drop into machine code.
III) Bringing skills (especially design-related skills) up to human
levels, and/or acquisition of additional hardware computing power, until
Singularity is reached.

The word "hard takeoff" means that the AI has begun climbing a
self-improvement curve which does not require significant additional human
input and which does not halt until human intelligence has been
transcended by a significant margin. The phrase "hard takeoff" does NOT
refer to what happens as soon as the AI starts modifying its own code or
completes stage II or whatever. It *might* turn out to happen that way
but that is *not* the definition of "hard takeoff". By the time the hard
takeoff starts you should be FINISHED with Friendly AI and, if not, you
MUST have finished a controlled ascent subgoal, a controlled ascent
programmatic feature if applicable, and structural completeness in
Friendliness deliberation if at all possible.

There's a lot of overlap between stages I and II; you start reaching for
deliberative self-modification before you've achieved structural
completeness for deliberation.

Stage III will probably go by more quickly than stages I and II, even if
it should involve more human input in an absolute sense, because it can be
very high-level input rather than programming.

From a true-conservative perspective, it should be assumed that real
human-style intelligence is achieved during stage III as the result of a
heck of a lot of work. From a Friendly AI perspective, it should be
assumed that a hard takeoff could conceivably begin during stage II or
stage I. From a free perspective it seems likely that a hard takeoff will
occur after human-style intelligence is achieved but significantly before
human-*equivalent* intelligence is achieved.

You can get structurally complete Friendly AI during stage I, but it won't
have much real content. (You can do a real, grounded controlled ascent
subgoal, though.) Most of the technical stuff in "Creating Friendly AI"
can be done in a fairly early AI system if you're going for structure
instead of real content.

I know how to do a lot of things that should be *sufficient* for a hard
takeoff. The problem is that I don't know if they're *necessary*. AIs
have an awful lot of advantages humans lack. Just because humans do X,
and general intelligence is impossible without X, and I know how to build
X, and I haven't done X yet, it doesn't mean I can safely say "A hard
takeoff won't happen until we do X".

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT