Re: Investing in FAI research: now vs. later.

From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Wed Feb 20 2008 - 08:52:58 MST


By the way, I should have mentioned that this is my personal interpretation of what I've read about Ben Goertzel's view of goal content - and it makes a heck of a lot of sense to me. Goal content takes the form of a specific episodic memory representing a goal, which is repeatedly and dominantly activated (activated by throughput). E.g., "Be compassionate to others." (My interpretation may not accurately represent Ben's, of course.)

Jeff Herrlich <jeff_herrlich@yahoo.com> wrote: "If you don’t want to do something then you cannot, and I don’t find that
very confusing."
   
  I find this sentence confusing, and of questionable relevance. An AI will not automatically want to overthrow its initial goal unless a dominant overthrow-the-initial-goal goal is already in place - which we will not be including in the AI, of course.
   
  "Apparently I have to point out yet again that there is no goal that
universally motivates human behavior, not even the goal of self
preservation."
   
  This is a valid argument regarding humans, but it is by no means insurmountable. AIs don't need to have a human-like architecture (and they won't anyway). For example, the AI's super-goal can be made permanently dominant by favorably weighting its concept representation (procedurally) and by favorably weighting attention allocation toward it. One of the major reasons human goals change all the time is that our attention is always shifting dramatically - we don't have a permanently dominant weighting attached to a particular concept, thought, or episodic memory, the way an AI can be designed to have.
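  To make the idea concrete, here is a toy sketch (my own illustration, not anything from Ben's actual designs - all names and numbers are made up) of a simple activation loop in which ordinary concepts decay while the super-goal's activation is clamped to a dominant floor, so attention always returns to it:

```python
# Toy spreading-activation model. Each concept node's activation decays
# per step; the super-goal node's activation is clamped to a dominant
# floor, so it can never fade out of attention. Purely illustrative.

DECAY = 0.8             # per-step activation decay for ordinary concepts
SUPERGOAL_FLOOR = 1.0   # super-goal activation never falls below this

class ConceptNode:
    def __init__(self, name, activation=0.0, is_supergoal=False):
        self.name = name
        self.activation = activation
        self.is_supergoal = is_supergoal

    def step(self, stimulus=0.0):
        """Decay activation, add any new stimulus, enforce dominance."""
        self.activation = self.activation * DECAY + stimulus
        if self.is_supergoal:
            # The dominant weighting: the goal representation is never
            # allowed to decay below its floor, wherever attention drifts.
            self.activation = max(self.activation, SUPERGOAL_FLOOR)

def most_active(nodes):
    """Attention allocation: the most active node captures attention."""
    return max(nodes, key=lambda n: n.activation)

nodes = [
    ConceptNode("be-compassionate", is_supergoal=True),
    ConceptNode("solve-chess-puzzle", activation=0.9),
]

# With no fresh stimulus, the ordinary concept decays toward zero while
# the super-goal stays clamped, so attention returns to the goal.
for _ in range(5):
    for n in nodes:
        n.step()

print(most_active(nodes).name)  # prints "be-compassionate"
```

  The design choice the sketch illustrates is exactly the one above: a human mind has nothing analogous to that `max(...)` clamp, which is why our goal hierarchy drifts with attention.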
   
  "Perhaps not perhaps my actions were random, but even if they were not
and they could be derived directly from Dirac's equation you could never
hope to perform such a calculation, much less do so for an AI with a
mind thousands of times as powerful as your own."
   
  That calculation is not necessary for constructing a Friendly or Unfriendly AI. And minds cannot operate by randomness.
   
  "Yes."
   
  That is an extremely irrational belief.
   
  Jeffrey Herrlich
 
   

       



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT