Re: Basement Education

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jan 29 2001 - 19:34:28 MST


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >
> > > Every minute that I ask an AI to deliberately
> > > delay takeoff puts another hundred deaths on *my* *personal*
> > > responsibility as a Friendship programmer.
> >
> > This is not balanced thinking. You are not personally responsible for
> > all the misery of the world. That you think you have a fix for a large
> > part of it, potentially, does not mean that delaying that fix for
> > safety's sake makes you responsible personally for what it may (or may
> > not) have fixed.
>
> The key word in that paragraph is "potentially". See below.
>
> > > In introducing an artificial
> > > delay, I would be gambling with human lives - gambling that the
> > > probability of error is great enough to warrant deliberate slowness,
> > > gambling on the possibility that the AI wouldn't just zip off to
> > > superintelligence and Friendliness. With six billion lives on the line, a
> > > little delay may be justified, but it has to be the absolute minimum
> > > delay. Unless major problems turn up, a one-week delay would be entering
> > > Hitler/Stalin territory.
> >
> > No. It has to be enough delay to be as certain as possible that it will
> > not eat the 6 billion people for lunch. In the face of that as even a
> > remote possibility there is no way it is sane to speak of being a Hitler
> > if you delay one week. Please recalibrate on this.
>
> If I take a vacation to decompress, *today*, I don't feel guilty; that
> comes under the classification of sane self-management. Doing a one-week
> delay *after* the AI reaches the point of hard takeoff... I guess my mind
> just processes it differently. It's like the difference between saying
> that "ExI is a more effective charity than CARE", and actually looting
> CARE's bank account. Logically, giving eight dollars of your money to ExI
> instead of CARE should have the same consequences as stealing eight
> dollars from CARE instead of giving it to ExI... but, morally, that's not
> how it works.
>

After the hard takeoff point, won't it be irrelevant whether any mere
mortal, even you, takes a week off or not?

I am really confused by your analogy. Logically, not giving your money
to X is not at all the same as stealing from X, since the concept of
stealing requires taking something from its rightful owner. But X
doesn't own your money. You do.

> Before the AI reaches hard takeoff, it's your time that you're investing
> in the AI, to the benefit of everyone in the world perhaps, but yours to
> invest in whatever payoff-maximizing strategy seems best. After the AI
> reaches the potential for hard takeoff, it's *their* time - and lives -
> that you're stealing.
>

I still don't get it. You are not synonymous with the SI. You still
don't know that the SI will actually be a net salvation for humanity.
It is pointless to frame this as stealing, or in such moralistic terms.
If you do not take extreme care, you will be unleashing mass
destruction instead of mass salvation. That is a moral responsibility I
understand.

- samantha


