Re: A position

From: Jimmy Wales (
Date: Tue May 22 2001 - 15:51:34 MDT

Eliezer S. Yudkowsky wrote:
> Then I decided that, since whether morality is *ultimately*
> arbitrary is a hidden variable, it makes sense to plan for both
> cases. Then I decided that since all known morality is known to be
> ultimately arbitrary, this should be treated as the default case, at
> which point I'd switched to Friendly AI theory.

I disagree very strongly with both of these claims. It is simply not true
that "all known morality is known to be ultimately arbitrary". That's a
much stronger claim than you can validly make.

> Game-theoretical altruism only operates between game-theoretical equals.
> I'm not saying that you can't have altruism between nonequals, just that
> there is no known logic that forces this as a strict subgoal of
> self-valuation.

I can't think of any good reason to desire altruism at all!

> I regret to inform you that your child has already been genetically
> preprogrammed with a wide variety of goals and an entire set of goal
> semantics. Some of them are nice, some of them are not, but all of them
> were hot stuff fifty thousand years ago. Fortunately, she contains
> sufficient base material that a surface belief in rationality and altruism
> will allow her to converge to near-perfect rationality and altruism with
> increasing intelligence.

Not altruism. I don't think you are using that word correctly.


*      The Ever Expanding Free Encyclopedia     *

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT