Re: A position

From: Jimmy Wales (jwales@aristotle.bomis.com)
Date: Wed May 23 2001 - 22:56:38 MDT


Eliezer S. Yudkowsky wrote:
> I don't see why you think this makes volition-based Friendliness
> impossible as an ethical primary. If you have a quantitative (or
> comparative) measure of how well a local event violates or matches
> "respecting the wishes of others", then that plus standard intelligence
> allows for a good stab at global optimization.

I'm not sure what you are asking. Or rather, I'm not sure you've
understood what I've been attempting to say. :-)

I don't think Friendliness should be equated with 'altruism'.

And, separately, I don't think that "respecting the wishes of
others" is a useful ethical primary, simply because, as you note,
it requires a _measure_ to be applied. So the measure itself is
the real primary being proposed.
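
To make that concrete, here is a toy sketch (the actions and the
numbers are entirely made up). Any agent told to "respect the wishes
of others" still has to rank candidate actions by some numeric
measure, and it is that measure, not the slogan, that determines
what the agent actually does:

    from typing import Callable, List

    Action = str

    def choose(actions: List[Action],
               measure: Callable[[Action], float]) -> Action:
        # The choice is fully determined by `measure`; the slogan
        # "respect the wishes of others" appears nowhere in the logic.
        return max(actions, key=measure)

    # Two rival quantifications of "respecting wishes" (hypothetical):
    def total_satisfaction(action: Action) -> float:
        # Sum of how well everyone's wishes are met.
        return {"build_park": 7.0, "build_mall": 9.0}[action]

    def worst_case_satisfaction(action: Action) -> float:
        # How well the least-satisfied person fares.
        return {"build_park": 4.0, "build_mall": 1.0}[action]

    actions = ["build_park", "build_mall"]
    print(choose(actions, total_satisfaction))       # build_mall
    print(choose(actions, worst_case_satisfaction))  # build_park

Two measures, each a defensible reading of the same slogan, and they
pick opposite actions. The measure is doing all the ethical work.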

And, separately from that, I think that a superintelligence that
can reprogram itself will quickly discard Friendliness anyway,
becoming instead an ethical egoist. I don't think we should fear
this, by the way. We should hope for it. But my reasons for thinking
this are perhaps beyond the scope of this list.

--Jimbo

-- 
*************************************************
*            http://www.nupedia.com/            *
*      The Ever Expanding Free Encyclopedia     *
*************************************************

