Re: Singularitarian Principles

From: Gordon Worley (redbird@mac.com)
Date: Wed Mar 21 2007 - 12:27:58 MDT


On Mar 19, 2007, at 12:38 PM, Joshua Fox wrote:

> How should we respond to future cases where Singularitarian ends
> truly justify extreme means? This is a basic moral dilemma of all
> ideologies, especially those which claim to offer absolute welfare
> to humanity.

First, it depends on what you want to do. I might feel that we can
justify extreme means to bring about Friendly AI, while others may
feel the same way about creating any superintelligence, believing it
our duty to create the next great step forward in intelligence, even
if it kills us. So when you say "Singularitarian," keep in mind
that even such a small group is not unified in what it believes to
be desirable.

For those who support the creation of Friendly AI, I don't think
extreme means can ever be justified. The whole point of Friendly AI
is that humans don't really know what is best, and even if we are
extremely confident that we do, we still want to be sure that, even
if the "bad guys" get the code, the AI will eventually turn out
good. Then we'll be able to say "I don't know; let's ask the
Friendly AI".

-- -- -- -- -- -- -- -- -- -- -- -- -- --
                Gordon Worley
e-mail: redbird@mac.com PGP: 0xBBD3B003
   Web: http://homepage.mac.com/redbird/


