Re: Beyond evolution

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jan 28 2001 - 13:58:54 MST


Ben Goertzel wrote:
>
> > Right. So I reified the warmth, love & compassion into a philosophy of
> > symmetrical moral valuation of sentient entities, used the philosophy to
> > take cognitive potshots at all the emotions that didn't look
> > sentient-symmetrical, and it worked. How is this different from a
> > Friendly AI maintaining Friendship in the face of any
> > sentient-asymmetrical emergent forces that may pop up?
>
> It's different in two ways:
>
> 1) Humans are fighting more negative emotions and intrinsic aggression, etc.,
> than AIs will (as you've shown me)
>
> 2) Humans have more intrinsic warmth, compassion & passion toward other AIs
> than AIs will
>
> So, compared to an AI, where friendliness is concerned, you've got things
> going for you & things going against you...

My point is that, without benefit of self-modification, I routinely
maintain my declarative cognitive supergoals against evolutionary tensions
that run *far* higher in a human than they would in a Friendly AI.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
