Re: Leaving soon

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon May 06 2002 - 06:26:55 MDT


Eugen Leitl wrote:
>
> On Sun, 5 May 2002, Dan Clemmensen wrote:
>
> > In this regard, I think that "friendliness" research and philosophy
> > may be important even if the SI ultimately derives from a
>
> Of what value is friendliness in a setting which is not even engineered?
> Sure, anybody's time is theirs alone to waste.

Too terse, meaning not derivable.

> > human/computer collaboration or as an emergent behavior of
> > co-operating programs rather than from an explicitly-designed AI.
>
> Hard takeoff is intrinsically unfriendly.

Simply untrue. You may think it is obviously unfriendly, but it is not
intrinsically unfriendly unless you can demonstrate that unfriendliness is a
logically necessary property of the category definition itself, irrespective
of your beliefs about subsequent events.

> We can talk about implementing
> friendliness, when you can define it in a watertight fashion.

Does this mean that until you can give a watertight definition of theft, you
don't mind if I pick your pocket? Or do you prefer a commonsense definition
to no definition at all? If so, what is the magical force that grants you
the ability to get along with commonsense definitions while a generally
intelligent AI must use a watertight definition or no definition at all?

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


