RE: ethics, joyous growth, etc.

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Feb 03 2004 - 07:02:19 MST


> I think humans have enough computational power to manage a
> universe-focused, unselfish value system - we just have (probably) a bit
> of hard wiring and a lot of cultural conditioning in the way.

   I suppose so... but it's also true that when we try to be altruistic, we
run into a lot of difficult paradoxes and puzzles ... which makes it easy to
fall back on our selfish hard-wiring...

> What about starting AGIs (even when they have relatively little grunt)
> with a variation of a universe-focused, unselfish value system - but
> configure it so that it is not too computationally demanding - then, as
> the computational power grows, the settings of the ethical system could
> be tweaked to take more and more of the 'other' into account? This seems
> better to me than starting with a local-oriented, self-focused value
> system and then having to manage a transition to a universe-focused,
> unselfish value system at some later date.

  I agree..

  Ben
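
To make the quoted proposal concrete, here is a minimal toy sketch in
Python. It is only one possible reading: the function name, the 50/50
weighting, and the idea of truncating the list of 'others' by a compute
budget are all illustrative assumptions, not anything specified in the
thread.

    def valuation(self_utility: float, other_utilities: list[float],
                  compute_budget: int) -> float:
        """Score an outcome, weighing as many 'others' as the budget allows.

        A larger compute_budget lets the agent weigh more of the other
        agents' utilities, approximating the move from a self-focused
        toward a universe-focused value system by tweaking one setting.
        """
        # How many others we can afford to model this cycle (a hypothetical
        # stand-in for 'not too computationally demanding').
        scope = min(len(other_utilities), compute_budget)
        if scope == 0:
            return self_utility  # degenerate, purely selfish case
        # Equal weight to self and to the average of the others in scope;
        # the 50/50 split is an arbitrary illustrative choice.
        return 0.5 * self_utility + 0.5 * sum(other_utilities[:scope]) / scope

    # e.g. a budget of 2 considers only the first two others:
    print(valuation(1.0, [0.2, 0.6, 0.9], compute_budget=2))  # prints 0.7

On this reading, 'tweaking the settings of the ethical system' amounts to
raising compute_budget (and, if desired, the weight on others) as hardware
grows, rather than swapping out the value system wholesale at a later date.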


