From: Rick Smith (firstname.lastname@example.org)
Date: Wed Aug 29 2007 - 15:17:39 MDT
You're assuming there that the UFAI cares enough about its own survival for cessation to be a realistic threat.
It could conclude that ceasing to exist increases the chances of its primary goal(s) being realised. For example, it might design and spawn a more fitting UFAI, then self-terminate to free up resources for its successor.
If we're assuming a UFAI comes about through one or more mistakes in 'reaching into mind-space', the same mistakes may lead to any belief-system trait we might find odd.
From: "Gwern Branwen" <email@example.com>
Date: 2007/08/26 Sun PM 05:15:56 BST
To: sl4 <firstname.lastname@example.org>
Subject: ESSAY: Would a Strong AI reject the Simulation Argument?
Couldn't a UFAI reason that, if a FAI were produced and it were aware
of this argument, then it would not need to bother with actually
running the wasteful simulations, since there is no danger of a UFAI
being created now that the FAI is running matters? From inside the
possibility of being in a simulation, a UFAI would have no way of
knowing that the (hypothetical) FAI is bluffing and not running any
simulations. Thus the mere threat suffices, since the UFAI cannot call
the bluff without risking ceasing to exist.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT