Re: Volitional Morality and Action Judgement

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sat May 22 2004 - 02:58:59 MDT


Samantha Atkins wrote:
>
> On May 21, 2004, at 8:35 PM, Eliezer Yudkowsky wrote:
>
>> Also, most of the threats I listed are subtle corruptions, not easily
>> detectable crashes. It seems to me that if I am too young to survive
>> without a backup, I am too young to mess around with self-modification.
>
> What? You want to program a FAI seed without so much as a delete key on
> your keyboard or a source control system? The trick is keeping some
> trustworthy means of evaluating the latest changes, whether to self or to
> the FAI-to-be, for desirability, and backtracking/re-combining
> accordingly. We aren't going to go spelunking into AI or our own minds
> without at least blazing a trail.

I was speaking of myself *personally*, not an FAI. An FAI is *designed* to
self-improve; I'm not. And ideally an FAI seed is nonsentient, so that
there are no issues of death if it is restored from backup, or of child
abuse if it is improperly designed the first time through.

Again, I do not object to the *existence* of a source control system for
humans. I say only that it should be a last resort, and that the plan
should be *not* to rely on it or use it.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
