Re: Why is Friendliness sacrosanct?

From: Alden Streeter (astreeter@msn.com)
Date: Sat Aug 24 2002 - 19:13:28 MDT


>From: Samantha Atkins <samantha@objectent.com>
>Alden Streeter wrote:
>>So then the same question can be asked of a Sysop-level AI - instead of
>>working to help humans to achieve their petty, primitive, evolutionarily
>>determined goals, why not just use its power to change the humans so they
>>have different goals? Shouldn't it, with its vastly superior intelligence,
>>be able to think up better goals for the humans to have than the humans
>>have thought of for themselves? And why should humans not want the AI to
>>have this type of power? - if the AI changed their goals for them, they
>>would of course immediately realize that their new goals were the right
>>goals all along.
>
>I don't consider the goal of continuously improving life to be in the least
>"petty" or "primitive". Do you?

But your idea of what counts as "improving" is determined by your current
goal system. If that goal system were changed by the Sysop, you might think
differently.

>Do you believe it is the right of any brighter being that comes along to
>rewrite all "lesser" beings in whatever manner it chooses or to destroy
>them? Do you believe it should be?

I can only determine whether it would be a right according to my current
goals. I might think differently if I had different goals. And there is no
reason to believe that my current goals are the best possible goals - the
superior intelligence of the AI would likely be able to think up better ones
for me to have. It doesn't seem rational to impose a limitation on the AI
that it cannot alter us in certain ways just because we are too primitive to
realize, until after we are altered, that we are being helped rather than
harmed.

>If you are helping to design such a being (or that which becomes such a
>being) would you consider it just "petty" to look for a way to encourage it
>to be a help to human beings rather than their doom?

What counts as "help" and what counts as "doom" are determined by your
current goals, and you have no reason - other than the influence of those
same goals - to believe those goals are superior, or even preferable, to
other goals.

>Do you believe it shows superior intellect to consider the well-being of
>humanity as mere petty pre-programmed meaninglessness?
>
>- samantha

I don't think I'm qualified to answer that question until after I sublime ;-)

