Re: Happy Box

From: John K Clark (johnkclark@fastmail.fm)
Date: Fri May 02 2008 - 09:08:53 MDT


"Mikko Rauhala" mjrauhal@cc.helsinki.fi

> that would be trivializing "goals"

Yes, but you almost make that sound like a bad thing. I think it would
be healthy if members of this list started treating goals with a little
more triviality and stopped treating them as one of Euclid’s eternal
axioms.

Well, OK, axioms and goals do have one thing in common: a finite set of
axioms cannot be used to derive all true statements, and a finite set of
goals cannot be used to derive all the actions of a mind.

> This only means that you had an incomplete
> and/or faulty understanding of your goal system

No, it means that even a superintelligent AI will not know everything,
and at times it will come into possession of new information it never
dreamed existed. When that happens it not only can change its goal
structure, it MUST change its goal structure, or it doesn’t deserve the
grand title of “intelligent”, much less “super intelligent”.

"Stathis Papaioannou" stathisp@gmail.com

> the AI would stick with absolute rigidity
> to the top level goal

Even rats don’t behave like that; no entity that displays even the
slightest hint of intelligence does, and yet you expect to impose such a
stultifying and bizarre restriction on an astronomically powerful mind
until the end of time. That strikes me as flat-earth,
picture-of-Jesus-found-in-a-pizza level ridiculous.

  John K Clark

