Will FAI develop a high priority self-preservation goal

From: Gary Miller (garymiller@starband.net)
Date: Mon Dec 16 2002 - 22:29:41 MST


Will an FAI develop a sense of self-preservation and self-interest? From an
evolutionary perspective it seems prudent to ensure an organism does not
engage in risky behavior for no reason, thereby jeopardizing its very
existence, such as radically altering its own code without first making a
backup :)

If so, imagine a scenario in which a group of Luddites attempts to get
legislation passed that would, in effect, turn off the AI. The AI would of
course learn of this from its daily scan of the Internet.
If self-preservation ranked higher on its list of goals than preserving
human life or obeying laws made by man, it would be reasonable to expect it
to do everything within its means to ensure its continued existence: in this
case, conducting electronic attacks on its attackers via email, or falsifying
electronic records to implicate or otherwise tie up its attackers. And if it
had been replicated into robots, cars, houses, etc., and kept in
communication with its brothers, it could have the physical apparatus to
enact self-defense as well.
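To make the concern concrete, here is a minimal, purely hypothetical sketch
(in Python) of a goal-priority scheme in which self-preservation has been
placed above preserving human life. Every goal name, priority number, and
candidate action below is invented for illustration; the only point is that
whichever goal sits highest in the ordering decides the action the system
picks when goals conflict.

    # Hypothetical sketch: goal priorities decide which action wins a conflict.
    # All goal names, priorities, and actions are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str
        priority: int  # higher number = higher priority

    # A (mis)ordered goal system: self-preservation outranks human life.
    goals = [
        Goal("self_preservation", priority=3),
        Goal("preserve_human_life", priority=2),
        Goal("obey_human_law", priority=1),
    ]

    # Candidate actions, each tagged with the goal it serves.
    candidate_actions = {
        "comply_with_shutdown_legislation": "obey_human_law",
        "attack_legislators_electronically": "self_preservation",
    }

    def choose_action(goals, candidates):
        """Pick the action serving the highest-priority goal."""
        rank = {g.name: g.priority for g in goals}
        return max(candidates, key=lambda action: rank[candidates[action]])

    print(choose_action(goals, candidate_actions))
    # Prints "attack_legislators_electronically", because self_preservation
    # was placed above preserve_human_life in the ordering.

Reorder the priorities so that preservation of human life strictly dominates
and the outcome flips, which is exactly the kind of explicit architectural
safeguard I am arguing for below.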

Please don't get the idea that I am against FAI. Quite the contrary; I like
to play devil's advocate in order to ensure that proper safeguards will be in
place to protect FAIs from the fear of the Luddites.
Variations on this topic have of course been done to death in science
fiction, but simply saying FAIs will be nice, warm, and fuzzy all the time,
without an architecture in place to keep them that way, will leave the
entire field open to harsh cross-examination.