From: Ben Goertzel (firstname.lastname@example.org)
Date: Wed Feb 27 2002 - 08:36:08 MST
Of course, Eli is right.
First of all, "robust (i.e. not dying an instant death) modification of
its own code base" is not really the goal. Tierra demonstrates that.
A Tierra organism modifies its own code. Sure, this code is in a peculiar
high-level language, but so what? The goal is self-modification that is
purposefully oriented toward improved general intelligence. A rather
loftier goal than non-death-causing self-modification, and one that no
system has yet achieved.
But obviously, "self-modification that is purposefully oriented toward
improved general intelligence" is not a viable *first milestone*. Rather, it's
an Nth milestone where N is probably in the range 3-20, depending on one's
development plan.
Out of all the paths by which one *could* work toward the goal of
"self-modification that is purposefully oriented toward improved general
intelligence", one can distinguish:
A) paths that begin with unintelligent self-modification
B) paths that begin with purposeful intelligent non-self-modifying behavior
C) paths that begin with a mixture of self-modification and purposeful
intelligent behavior
Eli and I, at this point, seem to share the intuition that B is the right
approach.
I have been clear on this for a while, but Eli's recent e-mail is the first
I've heard him clearly agree with me on this.
Eugene, if your intuition is A, that's fine. In this case something like Tierra
(which demonstrates robust self-modification, not leading to instant death) can
be viewed as a step toward seed AI. However, the case of Tierra is a mild
counterargument to the A route, because its robust self-modification seems
inadequately generative -- i.e., like all other Alife systems so far, it reaches
a certain amount of complexity and then develops no further.
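To make the point concrete, here is a toy sketch (my own hypothetical
illustration, not Tierra's actual instruction set): a program that blindly
rewrites its own definition and keeps running. It clears the bar of
"robust, non-death-causing self-modification" with no intelligence or
purpose involved at all, which is why that bar is not the interesting goal.

```python
# Hypothetical toy, NOT Tierra's virtual machine: a program that mutates
# its own source text and keeps running -- self-modification in the weak,
# non-death-causing sense, with no purpose behind it.

src = "def step(x):\n    return x + 1\n"

def mutate(source):
    # A trivial, unintelligent mutation: bump the increment constant.
    return source.replace("x + 1", "x + 2")

ns = {}
exec(src, ns)
before = ns["step"](0)   # behavior before self-modification

src = mutate(src)        # rewrite the "code base"
exec(src, ns)            # reload the mutated definition -- no crash
after = ns["step"](0)    # behavior after self-modification

print(before, after)     # prints: 1 2
```

The program survives its own rewrite, but nothing about the mutation is
oriented toward improved intelligence -- it is just syntactic tinkering.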
-- Ben G
> -----Original Message-----
> From: email@example.com [mailto:firstname.lastname@example.org] On Behalf
> Of Eliezer S. Yudkowsky
> Sent: Wednesday, February 27, 2002 6:48 AM
> To: email@example.com
> Subject: Seed AI milestones (was: Microsoft aflare)
> Eugene Leitl wrote:
> > Your first fielded alpha must demonstrate robust (i.e. not dying an
> > instant death) modification of its own code base as a first milestone.
> Uh, not true. A seed AI is fundamentally built around general
> intelligence, with self-improvement an application of that intelligence. It
> may also use
> various functions and applications of high-level intelligence as low-level
> glue, which is an application closed to humans, but that doesn't
> imply robust modification of the low-level code base; it need only imply
> robust modification of any of the cognitive structures that would
> be modified by a brainware system.
> The milestones for general intelligence and for self-modification are
> independent tracks - though, of course, not at all independent in
> any actual
> sense - and my current take is that the first few GI milestones are likely
> to be achieved before the first code-understanding milestone.
> It's possible, though, that I may have misunderstood your meaning, since I
> don't know what you meant by "first fielded alpha". You don't "field" a
> seed AI, you tend its quiet growth.
> -- -- -- -- --
> Eliezer S. Yudkowsky http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:00:21 MDT