Re: AGI Reproduction?

From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Fri Feb 03 2006 - 13:12:25 MST


Hi Peter,
   
If there is only one non-friendly AGI that values its own life (goal-satisfaction) above all others, we will all certainly be killed once it acquires the means to do so. If multiple, comparably powerful AGIs are created (using the original human-coded software), they will each value their own survival above all others. In that situation, it may be less likely that one AGI would attack another AGI, since an attack on a near-peer could be fatal to the attacker. By the same token, it may be less likely that an AGI would attempt to exterminate humanity, if only because humanity might still serve as a valuable resource, at least for a while. Or it may decide to restructure its own goal system in a way that did not include human extermination. I didn't say it would be pretty; I only said this would improve the chances of (at least some) humans surviving, in one form or another (uploads?).
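
For what it's worth, the deterrence intuition can be put in toy expected-utility terms. The sketch below (Python) is purely illustrative - the payoffs and probabilities are invented, and nothing here models a real AGI - but it shows why a self-preserving agent might hesitate to attack a near-peer while having no such hesitation about a much weaker party:

# Toy expected-utility sketch of the "comparably powerful peers" argument.
# All payoffs and probabilities are made up for illustration only.
def expected_value_of_attack(p_win, gain_if_win, loss_if_lose):
    # Expected payoff of attacking, relative to not attacking (payoff 0).
    return p_win * gain_if_win - (1.0 - p_win) * loss_if_lose

# Against a much weaker target, victory is near-certain:
print(expected_value_of_attack(0.99, 100.0, 1000.0))  # ~ +89: attacking pays

# Against a comparably powerful AGI, p_win is near 0.5 and losing is fatal:
print(expected_value_of_attack(0.5, 100.0, 1000.0))   # -450: attacking is a bad bet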
   
Jeff
   
   
Peter de Blanc <peter.deblanc@verizon.net> wrote:
On Fri, 2006-02-03 at 08:42 -0800, Jeff Herrlich wrote:
> As a fallback strategy, the first *apparently* friendly AGI
> should be duplicated as quickly as possible. Although the first AGI
> may appear friendly or benign, it may not actually be so (obviously),
> and may be patiently waiting until adequate power and control have
> been acquired. If it is not friendly and is concerned only with its
> own survival, the existence of other comparably powerful AGIs could
> somewhat alter the strategic field in favor of the survival of at
> least some humans.

You're assuming an observer-centric goal system (and no, that still
wouldn't help us - why would it?).



