Re: An essay I just wrote on the Singularity.

From: Perry E. Metzger (perry@piermont.com)
Date: Fri Jan 02 2004 - 14:08:23 MST


Tommy McCabe <rocketjet314@yahoo.com> writes:
> --- Mitchell Porter <mitchtemporarily@hotmail.com>
> wrote:
>>
>> >Survival? If the first transhuman is Friendly,
>> >survival is a given, unless you decide to commit
>> >suicide.
>>
>> Or unless it thinks you're better off dead.
>> http://www.metu.edu.tr/home/www41/eda.doc
>
> If it thinks you're better off dead, either 1), it is
> for such a compelling reason that you agree and commit
> suicide, or 2), the AI is unFriendly. Wouldn't you
> call an AI that decided that someone should be dead
> for no good reason unFriendly?

But what if, according to some quite reasonable set of rules, it
appears that you would be better off dead? And, even worse, what if it
appears that the only reasonable way to defend group A is to kill some
members of group B -- what we call a war?

Certainly under some circumstances it might be necessary to
kill. Alien invaders may wish to take over our resources, or one group
of people might decide to conquer another, and it might not be easy
to stop them without force.

One can, of course, resort to a "moral system" to try to determine
what the right thing to do is -- but humans have been unable to agree
on such a thing after thousands of years of trying, and we appear to
be making little progress on constructing one.

It is issues like these, among others, that make me feel the notion of
"Friendliness" is far too simplistic.

Perry
