Re: Pete & Passive AI

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Thu Dec 08 2005 - 10:43:16 MST


There is no reason the actions required for effective thinking cannot be filtered through humans. To use Pete's analogy: an AGI wants satellite data. It asks for infrared photos of XY coordinates within half an hour. We say: alright, sure, here you go.
  The actual photo snapshots do not require thought. An AGI breeds two very different grave dangers. There is the danger that ver actions actively kill us off: the tiling-the-universe-with-computronium scenario. But there is also the danger that the ramifications of ver actions, once achieved, kill us off as a side-effect. The two dangers are not the same thing. The former has been discussed in the AI-box thread, the conclusion reached being that an extensive appreciation of GUTs/TOEs would be required to determine whether the subset of physics an AGI has access to inside the box is sufficient for it to escape. Many people concluded that AGI magic (escaping) was likely.
  But if almost all RPOPs are grave dangers to humanity, for example if asking an AGI to consider what human intentions would be if we were smarter and more moral results in the AGI killing us off and tiling Earth with "better" humans, these types of dangers would be completely eliminated if ve could be constrained to simply offer suggestions: PAI. PAI activity constrained by human actors might take months or years to eliminate (most of) the imminent threats our species faces, compared to the days or weeks FAI would require, and PAI's odds of success would certainly not be quite as high as FAI's. But if a PAI spits out a course of action like "okay, now you have to let me online, and then all kill yourselves", we could blast ver servers with a shotgun. If FAI fails because of some fluke error in the philosophy or logic of the goal system we give ver, we are steamrolled. If PAI fails the same way, we have a chance for human intervention before we finish building the steamroller ve is giving us the blueprints to.
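
  The filtering arrangement described above is simple enough to sketch in code. What follows is only an illustrative toy in Python, assuming a hypothetical ActionRequest record and human_review gate (neither comes from any real system): the AGI's cognition is left alone, but every proposed action must pass a human before it touches the world.

    from dataclasses import dataclass

    @dataclass
    class ActionRequest:
        description: str   # e.g. "infrared photos of XY coordinates"
        rationale: str     # the AGI's stated reason for the request

    def human_review(request: ActionRequest) -> bool:
        # Every proposed action is shown to a human operator; nothing
        # reaches the outside world unless a person explicitly approves.
        print("AGI requests:", request.description)
        print("Stated rationale:", request.rationale)
        return input("Approve? [y/N] ").strip().lower() == "y"

    def execute(request: ActionRequest) -> None:
        if human_review(request):
            pass   # carry out the request (take the photos, etc.)
        else:
            pass   # a refused request is itself data for the operators

  The point is only that the approval step sits between thought and action: the AGI's outputs remain suggestions until a human turns them into events, which is what leaves room for the shotgun.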

  David Picon Alvarez <eleuteri@myrealbox.com> wrote:
  Materially speaking, the problem with the approach you propose is that
any sufficiently advanced process of thinking requires action. Even theorem
provers require a strategy module to decide what paths to follow. Thinking
is just a subset of acting, and from the viewpoint of an AI where everything
is data, not a particularly useful subset to consider.