Re: Definition of strong recursive self-improvement

From: Russell Wallace (russell.wallace@gmail.com)
Date: Sat Jan 01 2005 - 07:34:01 MST


On Sat, 1 Jan 2005 02:41:55 -0500, Randall Randall
<randall@randallsquared.com> wrote:
> I think I understand the question being asked here,
> and I think it's an important one, so let me try to
> ask it or a related one in a different way:
>
> When you write code, you simulate, on some level,
> what the code is doing in order to determine whether
> the algorithm you're writing will do what you intend.
> However, no amount of simulation will allow you to
> design an algorithm that is more intelligent than
> you are, since it must be executable within your
> current algorithm.

It's even considerably worse than that.

Simulation can only show what an algorithm does on a particular
input, but in general the number of possible inputs is infinite, or
at least exponentially large in the input size, and the question "is
B more intelligent than A?" is a question about B's behavior versus
A's across the set of all _possible_ inputs.
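To make the "exponentially large" point concrete, here is a minimal sketch (my illustration, not from the email; the 10^9 evaluations-per-second figure is an assumption) of how quickly input-by-input simulation becomes hopeless:

```python
# Sketch: the input space of even a modest algorithm is exponentially
# large, so simulating behavior one input at a time cannot establish
# a claim about all possible inputs.

def input_space_size(n_bits):
    """Number of distinct inputs for an n-bit input: 2**n_bits."""
    return 2 ** n_bits

def years_to_enumerate(n_bits, evals_per_second=10**9):
    """Time to simulate every input, at an assumed 1e9 evals/sec."""
    seconds = input_space_size(n_bits) / evals_per_second
    return seconds / (3600 * 24 * 365)

# Even a 128-bit input takes on the order of 1e22 years to enumerate,
# far longer than the age of the universe (~1.4e10 years).
```

Exhaustive simulation settles nothing here; any claim about behavior across all inputs has to come from reasoning about the algorithm, not from running it.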

Worse still, the real world can't be exactly simulated. It can't even
be very accurately simulated with foreseeable computing power
(including nanocomputers). If you're trying to create or improve an
AI that can perform real-world tasks such as designing jet engines or
proteins, flying a remote vehicle, or processing signals in real
time, there's no way to simulate how it will perform in _one_
situation, let alone across the set of all possible situations.
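One standard way to see why accurate simulation of real-world dynamics is so hard is sensitivity to initial conditions: in a chaotic system, any error in the starting state grows exponentially. A minimal sketch (my illustration, not from the email) using the logistic map:

```python
# Sketch: chaotic dynamics defeat accurate long-horizon simulation.
# In the logistic map x -> r*x*(1-x) with r = 4, two trajectories
# starting a mere 1e-12 apart diverge to order-1 differences within
# a few dozen steps, so any measurement error swamps the prediction.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)  # tiny initial-state error
gap = max(abs(x - y) for x, y in zip(a, b))
```

Real physical systems (airflow over a jet engine, protein folding) are vastly higher-dimensional than this one-dimensional toy, which is why even nanocomputer-scale resources don't buy an accurate simulation of them.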

- Russell



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT