Re: Volitional Morality and Action Judgement

From: Randall Randall (randall@randallsquared.com)
Date: Mon May 24 2004 - 12:40:26 MDT


On May 24, 2004, at 1:08 PM, John K Clark wrote:

> On Sun, 23 May 2004 "Eliezer Yudkowsky" <sentience@pobox.com> said:
>
>> I think it might literally take considerably more caution to
>> tweak yourself than it would take to build a Friendly AI
>
> Why wouldn’t your seed AI run into the same problem when it tries to
> improve itself?

For the same reason that, in general, it's easier to
predict the consequences of changing something in the
Linux kernel than the consequences of changing a
bacterial genome: designed, legible code is far more
tractable to modify than evolved biology.

--
Randall Randall <randall@randallsquared.com>
'I say we put up a huge sign next to the Sun that says
"You must be at least this big (insert huge red line) to ride this 
ride".' -- tghdrdeath@hotmail.com

