Re: moral symmetry

From: Gordon Worley
Date: Tue Dec 31 2002 - 19:47:43 MST

On Tuesday, December 31, 2002, at 07:07 PM, Wei Dai wrote:

> More seriously, the Friendly AI would have to divide up the universe
> somehow. I really don't see a method to do it that does not seem
> arbitrary
> or absurd in some way. It would be a treat just to see what it actually
> decides upon, and why.

This seems to be the key, here. What's listed in CFAI is probably
*Eliezer's* best idea of what an FAI might do in this situation.
Although Eliezer's ideas are probably closer to what might actually
happen than, say, George Bush's ideas, they are still a far cry from an
accurate portrayal of the future. So you certainly have a point:
Eliezer's idea has a hole (and I take it you don't see how to patch the
hole either, but it's still good to point it out). Maybe there's no way
to do it and there's nothing immoral about
personal resource differences. Maybe minds won't even own personal
resources as they do today, possibly making the point moot (e.g. the
Singularity is a big time-share system, like Unix).

At any rate, it's an interesting question, but, as you seem to
understand, it's not really a decision we get to make; it's a decision
our creation gets to make (so we need to make it a good, moral one).

Gordon Worley
PGP: 0xBBD3B003

"It requires a very unusual mind to undertake the analysis of the obvious."
    --Alfred North Whitehead

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT