From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Tue Feb 03 2004 - 06:59:24 MST
> Managing a local-oriented, self-focused value system is EASIER
> computationally than managing a universe-focused, unselfish value
> system. But with the vast computing power AGI's will have in the
> future, the latter may also be manageable.
I think humans have enough computational power to manage a
universe-focused, unselfish value system - we just have (probably) a bit
of hard wiring and a lot of cultural conditioning in the way. So I have
no doubt that AGIs will be able to manage such an ethic too once they
start to approximate or exceed the cognitive capacity of higher
primates, cetaceans, elephants, etc.
What about starting AGIs (even when they have relatively little grunt)
with a variation of a universe-focused, unselfish value system -
configured so that it is not too computationally demanding? Then, as
computational power grows, the settings of the ethical system could be
tweaked to take more and more about the 'other' into account. This
seems better to me than starting with a local-oriented, self-focused
value system and then having to manage a transition to a universe-
focused, unselfish value system at some later date.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:45 MDT