Re: Changing the value system of FAI

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Sun May 07 2006 - 13:33:49 MDT


There are two distinct ways an AGI can go bad. The computronium scenario is an AGI that kills us *accidentally* in the service of some other goal. There is also the AGI that *intentionally* gets rid of us because it worries we may meddle with its engineering. To help avert the computronium scenario, the total mass-energy (E=mc^2) throughput of each engineering technology an AGI proposes to use should be assessed and ranked. An AGI action to write poetry should be subject to few safeguards, but an AGI plan to harvest antimatter should be screened thoroughly.
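A minimal sketch of that screening idea in Python (the function names, tiers, and cutoffs are invented purely for illustration; real thresholds would need careful analysis):

    # Hypothetical sketch: rank a proposed AGI action by the estimated
    # mass-energy throughput of the technology it would use, and scale
    # the safeguards accordingly.

    C = 299_792_458.0  # speed of light, m/s

    def energy_budget_joules(mass_kg: float) -> float:
        """Upper bound on the energy the action could liberate or direct."""
        return mass_kg * C**2

    def required_safeguard_level(mass_kg: float) -> str:
        e = energy_budget_joules(mass_kg)
        if e < 1e12:       # small-scale actions (writing poetry)
            return "minimal review"
        elif e < 1e20:     # industrial-scale engineering
            return "full safety screening"
        else:              # antimatter harvesting and up
            return "exhaustive screening plus human sign-off"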
   
  I wish I knew better how Deep Blue worked, since that is the analogy being presented: humans as Kasparov, making efficient judgement calls, and an AGI-Blue brute-forcing as far as its computer resources permit. I would suggest that any AGI action which increases the aggregate person-years of happiness over suffering by some threshold (one million person-years?) be implemented immediately, provided the certainty of success is known to be very high (99.9999999%?). And the certainty-of-success trigger must rise as our apparent aggregate happiness increases over time; i.e., don't mess with a good thing. The analogy here is not for the AGI to plot the endgame, but just to capture a pawn if ve can get away with it. The rule is sketched below.
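A sketch of that pawn-capture trigger (the one-million person-year threshold and the 99.9999999% figure are the ones proposed above; everything else, including the shape of the rising certainty bar, is an invented illustration):

    # Hypothetical sketch of the proposed trigger: act immediately only
    # when the expected gain in aggregate person-years is large AND the
    # certainty of success clears a bar that rises as the status quo
    # improves ("don't mess with a good thing").

    BASE_CERTAINTY = 0.999999999      # 99.9999999%, as proposed above
    GAIN_THRESHOLD = 1_000_000        # person-years, happiness minus suffering

    def certainty_bar(current_happiness: float, baseline_happiness: float) -> float:
        """The better things already are, the closer to 1.0 the bar gets."""
        improvement = max(0.0, current_happiness - baseline_happiness)
        slack = (1.0 - BASE_CERTAINTY) / (1.0 + improvement)
        return 1.0 - slack

    def should_act(expected_gain_person_years: float,
                   certainty_of_success: float,
                   current_happiness: float,
                   baseline_happiness: float) -> bool:
        return (expected_gain_person_years >= GAIN_THRESHOLD
                and certainty_of_success >= certainty_bar(
                    current_happiness, baseline_happiness))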
  

"Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
  Phillip Huggan wrote:
> I guess the program would need to weigh its own available computer
> resources and divide by the potential aggregate # of human-years of
> suffering or happiness that hangs in the balance of the decision, and
> then pick the highest (tractable) pathway solution.

That's known as an "optimal stopping" rule. The problem being, of
course, that computing the exactly optimal time to spend computing a
good action is often more expensive than computing the exactly best action.
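
To make that regress concrete, a toy Python sketch (all numbers invented): an anytime optimizer that halts when its estimate of the marginal value of another step falls below the cost of that step. The irony is that the estimate is itself a computation whose optimal duration we would also have to compute, and so on down.

    import random

    def evaluate(action: float) -> float:
        """Stand-in utility function (hypothetical)."""
        return -(action - 3.7) ** 2

    def anytime_search(cost_per_step: float = 0.01, steps_cap: int = 10_000):
        best_action, best_value = 0.0, evaluate(0.0)
        recent_gains = []
        steps_used = 0
        for _ in range(steps_cap):
            steps_used += 1
            candidate = best_action + random.gauss(0, 1)
            value = evaluate(candidate)
            gain = max(0.0, value - best_value)
            if gain > 0:
                best_action, best_value = candidate, value
            recent_gains.append(gain)
            # Crude estimate of the marginal value of one more step:
            # average improvement over the last 50 steps.
            window = recent_gains[-50:]
            if len(window) == 50 and sum(window) / 50 < cost_per_step:
                break  # more thought costs more than it's expected to earn
        return best_action, best_value, steps_used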

                        


