Re: Volitional Morality and Action Judgement

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sun May 23 2004 - 16:38:46 MDT


Samantha Atkins wrote:
>
> On May 22, 2004, at 1:58 AM, Eliezer Yudkowsky wrote:
>
>> Samantha Atkins wrote:
>>>
>>> What? You want to program a FAI seed without so much as a delete
>>> key on your keyboard or a source control system? The trick is
>>> keeping some trustworthy means of evaluating the latest changes
>>> whether to self or to the FAI-to-be for desirability and
>>> backtracking/re-combining accordingly. We aren't going to go
>>> spelunking into AI or our own minds without at least blazing a
>>> trail.
>>
>> I was speaking of me *personally*, not an FAI. An FAI is *designed*
>> to self-improve; I'm not. And ideally an FAI seed is nonsentient, so
>> that there are no issues with death if restored from backup, or child
>> abuse if improperly designed the first time through.
>
> Funny, but we seem to have brains complex enough to self-improve
> extragenetically and to augment ourselves in various ways. We also have
> the brains (we think) to build the seed of more complicated minds than
> our own. I don't see where we aren't designed to self-improve. The
> AI will be designed to do it more easily of course.

Having general intelligence sufficient unto the task of building a mind able
to self-improve is not the same as being able to happily plunge into
tweaking your own source code. I think it might literally take considerably
more caution to tweak yourself than it would take to build a Friendly AI, at
least if you wanted to do it reliably. Unlike the case of building FAI,
there would be a nonzero chance of accidental success, but just because the
chance is nonzero does not make it large.

That we can self-improve "extragenetically" is simply not relevant; that is
passing on cultural complexity, which we *are* designed to do. The other
part of your analogy says, roughly speaking, that human beings can (we hope)
become FAI programmers, and therefore they can rewrite themselves. Leaving
aside that this analogy simply might not work, it's a hell of a bar to
become an FAI programmer, Samantha; it's one hell of a high bar. Most
people aren't willing to put forth that kind of effort, and never mind the
issue of innate intelligence. There is also a required strictness and
caution which people are not willing to accept, again because it looks like
work.
Here I am, who would aspire to build an FAI, saying: "Yikes! Human
self-improvement is way more dangerous than it looks! You've gotta learn a
whole buncha stuff first." And lo the listeners reply, "But I wanna
self-improve! Wanna do it now!" Which means they would go splat like
chickens in a blender, same as would happen if they tried that kind of
thinking for FAI.

I am not saying that you will end up being stuck at your current level
forever. I am saying that if you tried self-improvement without having an
FAI around to veto your eager plans, you'd go splat. You shall write down
your wishlist and lo the FAI shall say: "No, no, no, no, no, no, yes, no,
no, no, no, no, no, no, no, no, yes, no, no, no, no, no." And yea you
shall say: "Why?" And the FAI shall say: "Because."

Someday you will be grown enough to take direct control of your own source
code, when you are ready to dance with Nature pressing her knife directly
against your throat. Today I don't think that most transhumanists even
realize the knife is there. "Of course there'll be dangers," they say,
"but no one will actually get hurt or anything; I wanna be a catgirl."

> I do not see that it is ideal to have the FAI seed be nonsentient or
> that this can be strictly guaranteed. I don't see how it can be
> expected to understand sentients sufficiently without being or becoming
> sentient.

If you don't know how *not* to build a child, how can you be ready to build
one? Is it easier to design a pregnant woman than a condom? I am taking
the challenges in their proper order.

>> Again, I do not object to the *existence* of a source control system
>> for humans. I say only that it should be a last resort and the plan
>> should be *not* to rely on it or use it.
>
> OK, but I was objecting to the lack of such for an FAI, as you seem to
> believe you can think through the design issues so fully as to not need
> to backtrack. Many problems in a complex (even if mundane) system
> cannot be solved on paper or in one's head satisfactorily. They must
> be solved in the developing system itself. From some of your
> statements I am not sure you fully understand this.

Of *course* the FAI will have source control, and I'd prefer not to be
guilty of murder every time it's used.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

