Re: Definition of strong recursive self-improvement

From: Russell Wallace (russell.wallace@gmail.com)
Date: Sat Jan 01 2005 - 17:25:20 MST


On Sat, 1 Jan 2005 13:09:53 -0800, Samantha Atkins <sjatkins@gmail.com> wrote:
> I am not sure I see the difficulty. If one has ways of measuring
> correctness/degree of fit to a goal of results given problem context
> and the efficiency with which a nominally correct solution was arrived
> at and means for tweaking the mechanisms employed to reach the
> solution, then even something like a GA is in principle capable of
> generating progressive improvements to the system.

In principle at least, yes; a GA has already produced one full
general intelligence system (natural evolution arriving at human
intelligence), after all, and perhaps it could do so again.
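
To make the mechanism concrete, here's a rough sketch of the sort of
loop being described (Python; improve, measure_fitness and mutate are
hypothetical stand-ins for the measuring and tweaking machinery, and
note that the supergoal itself never changes):

    import random

    def improve(population, measure_fitness, mutate, generations):
        # measure_fitness scores how well a candidate meets the fixed
        # supergoal; mutate tweaks the mechanisms a candidate uses to
        # reach solutions.  Both are assumed, not specified here.
        for _ in range(generations):
            ranked = sorted(population, key=measure_fitness, reverse=True)
            survivors = ranked[:max(1, len(ranked) // 2)]   # selection
            offspring = [mutate(random.choice(survivors))
                         for _ in range(len(population) - len(survivors))]
            population = survivors + offspring
        return max(population, key=measure_fitness)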

However, then you're back to having a population of entities capable
of self-replication, whose fitness is tested by measuring their
performance in the real world; this is quite different from the "AI in
a basement" scenario that I understand Eliezer to be counting on.

> I don't see that recursive self-improvement requires that the
> [super]goal itself is changing. So what is the problem?

Well, it complicates things considerably: you have to keep an eye on
the population of candidate improved AIs to make sure what they're
doing in the real world is actually contributing to your supergoal
(rather than, say, the de facto supergoal of just making more copies
of themselves). It's not a "forget it, Friendly AI is impossible"
scenario, but it's very much more complicated than the "hard takeoff
in a basement" one - which is why "strongly recursive
self-improvement", as I understand Eliezer to mean it, would be a
nicer/safer way to go if it were possible.
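
Concretely, the extra work amounts to an audit gate added to the loop
sketched above (again hypothetical names; contributes_to_supergoal
would have to mean actually watching what each candidate does in the
real world, which is precisely the expensive part):

    def select_survivors(population, measure_fitness,
                         contributes_to_supergoal):
        # Drop any candidate whose observed behaviour serves some de
        # facto goal (e.g. bare self-copying) rather than the
        # supergoal, before fitness ranking decides who reproduces.
        audited = [c for c in population if contributes_to_supergoal(c)]
        ranked = sorted(audited, key=measure_fitness, reverse=True)
        return ranked[:max(1, len(ranked) // 2)]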

- Russell
