Re: Why is Friendliness sacrosanct?

From: Samantha Atkins (samantha@objectent.com)
Date: Fri Aug 23 2002 - 22:39:09 MDT


Alden Streeter wrote:
> As humans, our interests have been shaped by nothing more than evolution.
> From the strictly scientific viewpoint, our only reason for our existence
> (and that of all life) is to continue our existence by fulfilling our
> biological imperatives. Our intellectual pursuits are not technically goals
> in and of themselves, but only adaptations of survival methods given to us
> by evolution, which still have the ultimate purpose to simply promote our
> continued existence.

This is not a "strictly scientific viewpoint". No "viewpoint"
is strictly scientific, although it may be based on scientific
facts and theories. As soon as it becomes a viewpoint, it moves
into the realm of philosophy and opinion. A claim like "the
only reason for our existence" is highly over-inflated. We, as
conscious beings, have more than a little to say about what we
make the reason for our existence and the goals of our lives.

>
> The entire concept of a "Friendly" AI to me seems irrationally
> anthropocentric. Why should our human goals of survival take precedence over

Well, do you care whether or not a super-intelligence you create
has any goal of protecting other sentient beings at all? If
not, that may say a lot about your personal values, but it is
hardly a blanket condemnation, much less a "scientific" one, of
the concept of Friendly AI or its importance.

> any of the AI's goals? Our human goals, including the ultimate goal of
> survival, as well as our subgoals (is it valid to apply such AI terminology
> to humans as well? I don't see why not) of happiness, pleasure, desire for
> knowledge, etc., were determined by our primitive evolution, and are
> ultimately determined by our physiology; so when we have the ultimate power

You are talking about how we got here, not where we go from here.

> to control our evolution in the future, instead of enhancing our ability to
> achieve those goals (apotheosis), why couldn't we instead just change the
> goals? But then the conundrum is that our existing goals should determine
> what future goals we should want to have instead, but if we change them,
> then we might not have wanted those new goals in the first place. So does
> that mean that we must be stuck with the primitive goals we have evolved?
>

Nope.

> So then the same question can be asked of a Sysop-level AI - instead of
> working to help humans to achieve their petty, primitive, evolutionarily
> determined goals, why not just use its power to change the humans so they
> have different goals? Shouldn't it, with its vastly superior intelligence,
> be able to think up better goals for the humans to have than the humans have
> thought of for themselves? And why should humans not want the AI to have
> this type of power? - if the AI changed their goals for them, they would of
> course immediately realize that their new goals were the right goals all
> along.

I don't consider the goal of continuously improving life to be
in the least "petty" or "primitive". Do you?

Do you believe it is the right of any brighter being that comes
along to rewrite all "lesser" beings in whatever manner it
chooses, or to destroy them? Do you believe it should be? If
you are helping to design such a being (or that which becomes
such a being), would you consider it just "petty" to look for a
way to encourage it to be a help to human beings rather than
their doom?

Do you believe it shows superior intellect to regard the
well-being of humanity as mere petty pre-programmed
meaninglessness?

- samantha


