Re: superintelligence and ethical egoism

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 03 2001 - 00:36:27 MDT


Sigh... no definition of egoism offered, no explanation of how it would
slip into the AI's mind... how is this more than yet another case of
"AIs must share *my* philosophy"?

As for the rest of this: I'm interested in humans, but then, I'm a
human-level intelligence myself, so that level of complexity is an
ultimate challenge for me rather than a trivial problem. But even I can
see that the amount of complexity involved is both finite and
understandable, and that once you've understood it, you're done. Leave
that aside. Has it occurred to anyone that maybe it wouldn't be all that
*pleasant* to live in a world where superintelligences were interested in
people for purely utilitarian reasons? That, even if it were pleasant, it
would probably still be pointless? That there's a world out there full of
better possibilities?

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
