RE: Universalising an AGI's duty of care

From: H C (lphege@hotmail.com)
Date: Mon Jul 18 2005 - 10:10:48 MDT


Basically, what you are referring to with Golden Rule #1 and #2 is, assuming
a successful, cleanly Friendly AGI (with Friendliness external reference
semantics), a speculation on the Friendliness content of the AI's
"supergoal" of Friendliness.

Ultimately, since its Friendliness content is going to originate from human
desires, I still argue that neither humans nor the FAI will want to
arbitrarily infringe upon the rights of the alien race unless it becomes
necessary.

And if you don't assume a 'cleanly' FAI... well then we are probably
screwed, let alone the aliens.

-- Th3Hegem0n

>From: "Philip Sutton" <Philip.Sutton@green-innovations.asn.au>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: RE: Universalising an AGI's duty of care
>Date: Tue, 19 Jul 2005 01:44:46 +1000
>
>Ben,
>
> > when we propose abstract goals like "freedom, joy and growth" ........
> > or whatever, we are using terms that are not precisely defined -- and
> > that are...defined only in terms of the whole human culture and
> > human psychology.
> > ......
> > And if we're going to recommend to an alien civilization that it adopt
> > some goals we make up, we should remember that the alien civilization
>may
> > not actually have terms or concepts like our "freedom" , "joy",
>"growth",
> > "peace", etc. In order to explain what our goals mean to the aliens,
> > we'll need to steep them in the wonderful peculiarities of human culture
> > and human individual and collective psychology.
>
>I think this helps to illustrate my point that taking the perspective of
>'advising another galaxy' is valuable in sharpening up the needs and
>dilemmas.
>
>Just as sentient beings in another galaxy might have goals or aspirations
>that we don't share, so not all humans share the same goals, and most
>likely neither will the AGIs they create.
>
>So it seems very likely to me that AGIs everywhere throughout the universe
>will have non-uniform initial goal sets.
>
>But the issue of friendliness, or at least tolerance, still remains. If a
>flock of AGIs from various galaxies were to descend on Earth, what
>friendliness/tolerance codes would we like these AGIs to adhere to - in
>relation to us and the other living things on Earth? If we are to be
>ethical in our dealings with sentients in other galaxies, what
>friendliness/tolerance codes should we build into the AGIs that we create?
>
>I don't think it's a coincidence that golden rules are common in the most
>widespread human philosophies, e.g. "do unto others as you would have
>them do unto you" (Golden Rule 1) or "don't do unto others what they would
>have you not do unto them" (Golden Rule 2). Things seem to work
>reasonably well when diverse cultures have contacts that are governed by
>these sorts of rules. I imagine it wouldn't be impossible to implement
>these ideas in any galaxy where there are sentients clever enough to
>create AGIs. Golden Rule 1 can guide action even if you know nothing about
>the other sentients you are contacting (which means it is not a fail-safe
>rule). Golden Rule 2 means you need to build an understanding of any
>sentients you contact before taking action that could potentially violate
>the rule. Golden Rule 2 would therefore require an attitude of forbearance,
>patience and careful learning.
>
>By the way, the golden rules were invented to inject a bit of friendliness
>into natural general intelligences where contact between non-kin or
>out-groups occurred - especially in larger rural and early urban
>communities.
>
>My guess is that AGIs on Earth that adhered to both these rules would have
>at least a basic level of friendliness.
>
>Cheers, Philip
>


