Re: Leaving soon

From: Eugen Leitl (eugen@leitl.org)
Date: Mon May 06 2002 - 07:22:38 MDT


On Mon, 6 May 2002, Eliezer S. Yudkowsky wrote:

> > Hard takeoff is intrinsically unfriendly.
>
> Simply untrue. You may think it is obviously unfriendly. It is not
> intrinsically unfriendly unless you can demonstrate that it is a
> logically necessary property of the category definition irrespective
> of your beliefs about subsequent events.

Easy enough: the overwhelming majority of hard takeoff Singularity
trajectories are unfriendly, since they leave humanity no time even to
react, let alone adapt. It is up to you to prove that a specific set of
friendly trajectories *exists*, that you can *encode* their homeostatic
envelope in a seed a priori, and that you can assert a priori that, in
the course of its evolution, the system can never escape that envelope
(a computationally undecidable task).
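
As a minimal sketch of why that last step is undecidable (this is just
the standard halting-problem reduction; all the names below, such as
stays_in_envelope and make_seed, are illustrative inventions of mine,
not anything from this thread): if a total, always-correct verifier for
"this seed never escapes its envelope" existed, you could use it to
decide the halting problem.

def stays_in_envelope(seed):
    """The hypothetical verifier. No total, always-correct
    implementation can exist; this stub only marks where it would
    sit in the reduction."""
    raise NotImplementedError("undecidable in general (Rice's theorem)")

def escape_envelope():
    """Stands in for any behavior outside the friendliness envelope."""
    raise RuntimeError("left the envelope")

def make_seed(program, inp):
    """Return a 'seed' that escapes the envelope iff program(inp) halts."""
    def seed():
        program(inp)        # either loops forever (seed stays safe)...
        escape_envelope()   # ...or halts, and the seed then escapes
    return seed

def halts(program, inp):
    """If stays_in_envelope were a real total decider, this would
    solve the halting problem; contradiction, so it cannot exist."""
    return not stays_in_envelope(make_seed(program, inp))

Calling halts() raises NotImplementedError at exactly the point where
the impossible verifier would have to exist, which is the point.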
 
> > We can talk about implementing
> > friendliness, when you can define it in a watertight fashion.
>
> Does this mean that until you can give a watertight definition of theft, you
> don't mind if I pick your pocket? Or do you prefer a commonsense definition

Strawman. It's you who's striving to bring a thief into our midst. We're
not living in a hard takeoff Singularity right now.

> to no definition at all? If so, what is the magical force that grants you
> the ability to get along with commonsense definitions while a generally
> intelligent AI must use a watertight definition or no definition at all?

The problem is that if you goof up the friendliness definition, you
can't maintain the system's evolution trajectory within the friendliness
envelope (the definition being the restoring force), which results in
the Singularity turning malignant (since all the rest of the hard
takeoff space is pure Blight from the merely human receiving end).
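
As a toy illustration of the restoring-force point (entirely my own
construction, assuming a one-dimensional state and a simple proportional
controller; nothing here is from the thread): the same dynamics that
keep the trajectory inside the envelope when the definition is right
will steadily drive it outside when the encoded target is wrong.

import random

def trajectory(true_target, encoded_target, steps=1000, gain=0.1, noise=0.5):
    """Simulate random drift plus a restoring force that pulls the
    state toward the *encoded* target, and report the worst excursion
    from the *intended* target."""
    state = true_target
    worst = 0.0
    for _ in range(steps):
        state += random.gauss(0, noise)               # evolutionary drift
        state += gain * (encoded_target - state)      # restoring force
        worst = max(worst, abs(state - true_target))  # excursion from intent
    return worst

random.seed(0)
print("correct definition:", trajectory(0.0, 0.0))   # stays near the envelope
print("goofed definition: ", trajectory(0.0, 20.0))  # settles far outside it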


