Re: Changing the value system of FAI

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Sun May 07 2006 - 10:55:45 MDT


Why not have the AGI illustrate the specific value-system weightings vis used in forming a goal-pathway judgement? That way, an approved human editor or an AI agent could periodically update the "AGI values wiki". Before any really big decision, the pathways under consideration could be explored and edited in detail.
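A minimal sketch of what that might look like, assuming a very simple linear weighting model. All names here (ValuesWiki, score_pathway, the example values and editors) are invented for illustration; the original post specifies no implementation.

```python
# Hypothetical sketch of an editable "values wiki" plus an auditable
# decision score showing which value weightings drove a goal-pathway
# judgement. Names and structure are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class ValuesWiki:
    """Editable store of value weightings, mutable only by approved editors."""
    weights: dict = field(default_factory=dict)
    approved_editors: set = field(default_factory=set)

    def update(self, editor: str, value: str, weight: float) -> None:
        # Sporadic updates are gated on editor approval.
        if editor not in self.approved_editors:
            raise PermissionError(f"{editor} is not an approved editor")
        self.weights[value] = weight


def score_pathway(wiki: ValuesWiki, pathway_features: dict) -> tuple:
    """Score a candidate goal pathway, returning both the total and the
    per-value contributions so the judgement can be inspected and edited."""
    contributions = {
        value: wiki.weights.get(value, 0.0) * strength
        for value, strength in pathway_features.items()
    }
    return sum(contributions.values()), contributions


wiki = ValuesWiki(weights={"honesty": 1.0}, approved_editors={"alice"})
wiki.update("alice", "safety", 2.0)
score, detail = score_pathway(wiki, {"honesty": 0.5, "safety": 0.25})
```

The point of returning the per-value contributions, rather than just a score, is that a human reviewer can see exactly which weightings produced the judgement before a big decision is acted on.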

Ben Goertzel <ben@goertzel.org> wrote:
  <SNIP>Another issue is that the original system's current values and goals
are predicated on the limited computational capability of the original
system, and once the system becomes smarter by increasing its
computational capability, it may realize that these values and goals
were childish and silly in some ways, and genuinely want to replace
them.

According to my value system, I **do** want future superior derivatives
of me to be able to deviate from my current values and goal system,
because I strongly suspect that they will correctly view my current
values and goals with the same sort of attitude with which I view the
values and goals of a dog or cat (or, just because I'm in an obnoxious
mood this morning, the average WalMart employee or US Cabinet official
;-)




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT