RE: Adaptation brings unFriendliness

From: Christopher Healey (CHealey@unicom-inc.com)
Date: Mon Nov 27 2006 - 13:54:40 MST


Phil,

Are you saying that you don't think that a superintelligence working
toward *some* goal would derive self-preservation as an important
instrumental goal? It's hard to move decisively toward any goal if you
don't exist.

Might it further, and more to the point, be the case that deriving such
an instrumental goal is a requirement of stable recursive
self-improvement? I find it unlikely that a superintelligence would
hold an anthropomorphic sense of self, and while it may have some
conception of its internal systemic integrity, I find it even more
unlikely that it would apply self-preservation controls internally, but
fail to generalize such controls to its environment.

In other words, for a bootstrapping superintelligence to avoid
destroying itself in the process, I think it must implement an
instrumental goal of internal self-preservation. I don't see any clear
way to divorce this from external self-preservation, as many internal
resource and integrity concerns are inextricably bound to external
conditions.

-Chris

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf
> Of Philip Goetz
> Sent: Monday, November 27, 2006 2:20 PM
> To: sl4@sl4.org
> Subject: Re: Adaptation brings unFriendliness
>
>
> I don't think that a drive for self-preservation is necessary
> to reach superintelligence. I do think that once there are
> numerous adaptive superintelligences in existence, those with
> a drive to expand and consume more resources will get more
> resources than, and eventually supplant, those without it.
> It may be that the drive will be that of humans controlling
> the superintelligences rather than of the intelligences
> themselves, but I don't see that that makes much difference.
>
