From: xgl (email@example.com)
Date: Sun Jan 28 2001 - 10:22:29 MST
On Sun, 28 Jan 2001, Eliezer S. Yudkowsky wrote:
> I guess you'd better figure out how to use directed evolution and
> externally imposed selection pressures to manipulate the fitness metric
> and the basins of attraction, so that the first AIs capable of replication
> without human assistance are Friendly enough to want to deliberately
> ensure Friendliness in their offspring. Frankly I prefer the Sysop
> (singleton seed AI) scenario; it looks a *lot* safer, for reasons you've
> just outlined.
hmmm ... if we try to get Friendliness by directed evolution,
wouldn't Friendliness end up implicitly as a subgoal of survival? isn't
that, like, bad?
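(a toy sketch of the worry, my own illustration rather than anything from the thread: if the externally imposed fitness metric rewards Friendliness, then Friendliness is selected for only insofar as it buys reproduction -- i.e. instrumentally, as a subgoal of survival. all names and numbers here are made up for illustration.)

```python
import random

def evolve(pop_size=50, generations=40, friendliness_weight=1.0, seed=0):
    """Toy directed evolution. Each agent is a (capability, friendliness)
    pair in [0, 1]^2. Fitness = capability + weight * friendliness, so any
    friendliness that emerges is rewarded only via the selection pressure,
    not as a goal the agents hold directly."""
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]

    def fitness(agent):
        capability, friendliness = agent
        return capability + friendliness_weight * friendliness

    def mutate(x):
        # small Gaussian mutation, clamped to [0, 1]
        return min(1.0, max(0.0, x + rng.gauss(0.0, 0.05)))

    for _ in range(generations):
        # truncation selection: the fitter half survives and replicates
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        offspring = [(mutate(c), mutate(f)) for (c, f) in survivors]
        pop = survivors + offspring

    # mean friendliness of the final population
    return sum(f for _, f in pop) / len(pop)
```

running this with `friendliness_weight=1.0` drives mean friendliness up, and with `friendliness_weight=0.0` it merely drifts -- which is exactly the point: the trait exists only as long as the external pressure does, so remove (or let the agents remove) the pressure and nothing anchors it.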
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT