Re: Self-modifying FAI (was: How hard a Singularity?)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 26 2002 - 07:06:51 MDT


Eugen Leitl wrote:
>
> Coupling, as interacting. A runaway Singularity machine operating down
> here is not a colibri. I understand the whole point of building an
> ethics-aware Power is not that it tiptoes out of everybody's way. Or else
> why build it?
>
> Warping as changing the course of evolution. I've described why this must
> occur regardless of whether moral evaluation is completely externalized
> or completely internal. The Power asserts that the delta between the
> world's metric and its own is small at each iteration, but this doesn't
> constrain long-term drift. Since it's not a passive player, it drives the
> drift. Long-term results are undecidable.
>
> What I meant by the above is that it doesn't matter. An iron fist in a
> velvet glove is still an iron fist. All the problems are intrinsic to the
> power gradient, which the system must maintain to keep ahead of subject
> rebellion.

Eugen, please use smaller words or define the terms above. Or as you would
put it:

"Sentences too terse. The explanation creates imagery but does not
constrain it. Binding to the model in the original agent is possible but
not guaranteed; internal forces may supervene, causing the overall course of
system development to be internally reflective rather than externally
indexical. A finer-grained communications model is needed as a fix."

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT