From: Eliezer S. Yudkowsky (email@example.com)
Date: Sun Jan 28 2001 - 12:13:10 MST
Ben Goertzel wrote:
> Pointing to Buddhism was just a way of saying that friendliness, in humans,
> does not inevitably seem to have learning & knowledge creation as subgoals
The globelike shape of the Earth, in humans, is not an inevitable
conclusion from satellite photos. That's why my original post specified
that it was an inevitable conclusion for *transhumans* only.
Do you seriously think that a Friendly AI which totally lacked the
behaviors and cognitive complexity associated with learning would be more
effective in making Friendliness real?
Ergo, the behaviors associated with learning are valid subgoals of Friendliness.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Tue Jun 18 2013 - 04:00:20 MDT