Re: superintelligence and ethical egoism

From: Gordon Worley (redbird@rbisland.cx)
Date: Sun Jun 03 2001 - 07:19:40 MDT


At 2:36 AM -0400 6/3/01, Eliezer S. Yudkowsky wrote:
>Sigh... no definition offered for egoism,

Sorry, I just assumed there was no need to repeat the dictionary:

1 a : a doctrine that individual self-interest is the actual motive
of all conscious action b : a doctrine that individual self-interest
is the valid end of all actions

And here's a mistaken definition that I found for egoism:

Attempting to get personal recognition for oneself

Sounds more to me like just the opposite of egoism, since the egoist
is too concerned with verself to care what others think (this is one
of those qualities that I think AIs should develop and that I have,
but SL4 probably isn't the place for me to get into a discussion of
what characteristics I think intelligences should have).

>no explanation of how it slips
>into the mind...

Egoism would be built in, like Friendliness.

>how is this more than yet another case of "AIs must share
>*my* philosophy"?

It's a philosophy that has a relation to Friendliness, but part of
the reason that I like Friendliness now is that I realize it takes
this and other philosophies, whether on purpose or not, and develops
a new one for AIs (well, I guess Friendliness would be good for any
SI) that is wiser to the realities of the world. At one time I would
have written 'Let's make egoistic AIs', but now, after much more
thought and after FAI, I'd write 'Let's make Friendly AIs'.

>As for the rest of this: I'm interested in humans, but then, I'm a
>human-level intelligence myself, so that level of complexity is an
>ultimate challenge rather than a trivial problem. But even I can see that
>the amount of complexity is both finite and understandable, and that once
>you're done, you're done. Leave that aside. Has it occurred to anyone
>that maybe it wouldn't be all that *pleasant* to live in a world where
>superintelligences were interested in people for purely utilitarian
>reasons? That, even if it were pleasant, it would probably still be
>pointless? That there's a world out there full of better possibilities?

Yes. To be honest, I have little interest in the vast majority of
human beings now, so after I'm uploaded, why would I start caring?

I should clarify that last statement by adding that a lack of
interest in humans is not a lack of interest in what humanity might
do to me or how I might use humanity to better myself. I'm just not
interested in what happens to humanity as a whole; just me and a
couple of other people, though there are certainly some issues where
my interests match up with those of humanity (e.g. existential
disasters).

-- 
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP Fingerprint:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003
