Re: Basement Education

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jan 29 2001 - 20:24:00 MST


Samantha Atkins wrote:
>
> "Eliezer S. Yudkowsky" wrote:
> >
> > Before the AI reaches hard takeoff, it's your time that you're investing
> > in the AI, to the benefit of everyone in the world perhaps, but yours to
> > invest in whatever payoff-maximizing strategy seems best. After the AI
> > reaches the potential for hard takeoff, it's *their* time - and lives -
> > that you're stealing.
>
> I still don't get it. You are not synonymous with the SI. You still
> don't know the SI will actually be a net salvation for humanity. It is
> pointless to see this as stealing or in such moralistic terms. If you
> do not take extremely reasonable care then you will be unleashing mass
> destruction instead of mass salvation. That is a moral responsibility I
> understand.

Well, I could be wrong about this (that is, I find it very easy to imagine
changing my mind). Certainly if the *AI* says ve's not sure ve "gets"
Friendliness and wants to go slower, I wouldn't argue with that. But I'm
also instinctively prejudiced against the idea of abusing the AI's
(temporary) trust to slow down the Singularity out of sheer anxiety and
failure of nerve. An artificial wait isn't risk-free.

I believe three things:

First, we should plan on winning this cleanly, completely, and with huge
margins of error. When this is over, and some superintelligence tots up
my "score" a la Infocom, I want to hear that I could have spent one third
the effort on Friendship content and still done just fine, and that the
project never came anywhere near failing.

Second, anxiety can never be completely eliminated, so the mere cognitive
presence of anxiety is not an adequate rationale for taking an even
riskier action just to discharge it - just to be "doing something about
it". I think this is a force that is often underestimated in human
psychology.

Third, no matter what goes wrong or why, it will be my responsibility.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


