Re: A position

From: Gordon Worley (redbird@rbisland.cx)
Date: Tue May 22 2001 - 22:24:06 MDT


At 4:51 PM -0500 5/22/01, Jimmy Wales wrote:
>> Game-theoretical altruism only operates between game-theoretical equals.
>> I'm not saying that you can't have altruism between nonequals, just that
>> there is no known logic that forces this as a strict subgoal of
>> self-valuation.
>
>I can't think of any good reason to desire altruism at all!
>
>> I regret to inform you that your child has already been genetically
>> preprogrammed with a wide variety of goals and an entire set of goal
>> semantics. Some of them are nice, some of them are not, but all of them
>> were hot stuff fifty thousand years ago. Fortunately, she contains
>> sufficient base material that a surface belief in rationality and altruism
>> will allow her to converge to near-perfect rationality and altruism with
>> increasing intelligence.
>
>Not altruism. I don't think you are using that word correctly.

I've fussed about this before, but altruism is the closest word we
have. First off, don't use the Ayn Rand definition; it has very
little to do with what Friendliness is. If I've got it right, when
Eliezer uses altruism, he means something that has no relation to
the self (remember, unanthropomorphic thought) but rather to
respecting the wishes of others. If someone asks you not to hit
ver in the nose, you don't. On the other hand, if someone asks you
to feed ver a sandwich, you try to, so long as it doesn't seem to
violate any of your higher goals (e.g. if the act would violate
Friendliness, the AI won't do it). It's much more complex than the
simple definitions that are usually applied. Now, I could have
gotten this wrong, but this is what I have come to think Eliezer
means. If I'm off, please correct me.
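
To make that priority ordering concrete, here's a rough sketch in
Python. This is purely my own illustration (the names and the
trivial conflict check are assumptions, nothing from Eliezer's
actual design): a request from someone else is honored unless it
conflicts with a higher goal like Friendliness.

    # Illustrative only: my own naming, not Eliezer's design.
    # A request is honored unless it conflicts with a
    # higher-priority goal such as Friendliness.

    HIGHER_GOALS = ["friendliness"]  # assumed priority ordering

    def violates(request, goal):
        # Stand-in check; a real system would have to reason about
        # the consequences of the act, not read a label off the
        # request.
        return goal in request.get("conflicts_with", [])

    def respond(request):
        for goal in HIGHER_GOALS:
            if violates(request, goal):
                return "decline"  # e.g. a request to hit someone
        return "comply"           # e.g. "feed ver a sandwich"

    # respond({"action": "make sandwich",
    #          "conflicts_with": []})               -> "comply"
    # respond({"action": "hit in nose",
    #          "conflicts_with": ["friendliness"]}) -> "decline"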

-- 
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP Fingerprint:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003

