Re: Basement Education

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jan 29 2001 - 02:23:38 MST


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >

> > You certainly do have a choice. If you do not hook the system up in such
> > a way that it controls hardware manufacturing at all levels until it is
> > a bit more seasoned, that would be a quite prudent step.
>
> Prudent, maybe; effective, almost certainly not. A superintelligence has
> access to *me*. Ve has access to external reality... would ve really
> notice all that much of a difference whether the particular quark-swirls
> ve contacts are labeled "hardware manufacturing" or "Eliezer Yudkowsky"?
>

Are you assuming this SI is so intelligent that it can reach hardware
manufacturing facilities by some unknown means BEFORE it has developed
enough to be trusted? If so, then we are pretty thoroughly screwed. Yes?

It depends on the level of access to *you*.

> If you're gonna win, win *before* you have a hostile superintelligence on
> your hands. That's common sense.
>

I assume it is not hostile at this point but simply inexperienced and
likely to make devastating errors of judgement.

> > > Where does experience in Friendliness come from? Probably
> > > question-and-answer sessions with the programmers, plus examination of
> > > online social material and technical literature to fill in references to
> > > underlying causes.
> >
> > That would not be enough to develop common sense by itself. Too much is
> > assumed of the underlying presumed human context in the literature.
>
> I think you're wrong about this.
>

Really? How many AI systems do you know of that can read texts written
for humans, especially in sociology and history, and make sense of them?
How many can parse even the most technical books and extract usable
information?

> > I think you are guessing wrong unless quite a bit of the detailed common
> > sense is developed or entered before the young AI goes off examining
> > papers and running simulations. Knowing the architecture of human minds
> > is not sufficient for having working knowledge of how to deal with human
> > beings.
>
> Well, I disagree. In my own experience, the amount of real-world
> experience needed decreases pretty sharply as a function of the ability to
> theorize about the causation of the observed experiences you already have.
>

But your own experience is still that of a human, with quite a bit of
developmental RT work behind the acquisition of this common sense. I
don't think you can reliably reason from your own introspection to
predictions about what the young SI will experience, or about what will
or will not be adequate for it to reach this degree of understanding.

> >
> > How can human programmers answer a sufficient number of the AI's
> > questions in a mere 12 hours?
>
> If the human programmers need to provide serious new Friendship content
> rather than just providing feedback on the AI's own actions, then one may
> be justified in going a little slower. If the AI is getting everything
> right and the humans are just watching, then zip along as fast as
> possible.
>

For humans to really evaluate the AI's responses would probably take
longer. Within reason, we have to err on the side of caution.

> > AI time is not the gating factor in this
> > phase. And there is no reason to rush it. So many people dying per
> > hour is irrelevant and emotionalizes the conversation unnecessarily.
> > Letting the AI loose too early can easily terminate all 6 billion+ of
> > us.
>
> Yes, that is the only reason why it makes sense to take the precaution at
> all. I do not believe that so many people dying per hour is
> "irrelevant". I think that, day in, day out, one hundred and fifty
> thousand people die - people with experiences and memories and lives every
> bit as valuable as my own.

Tragic, yes. But it is irrelevant to deciding how much to hurry a very
dangerous development out the door.

> Every minute that I ask an AI to deliberately
> delay takeoff puts another hundred deaths on *my* *personal*
> responsibility as a Friendship programmer.

This is not balanced thinking. You are not personally responsible for
all the misery of the world. That you think you may have a fix for a
large part of it does not mean that delaying that fix for safety's sake
makes you personally responsible for everything it might (or might not)
have fixed.

> In introducing an artificial
> delay, I would be gambling with human lives - gambling that the
> probability of error is great enough to warrant deliberate slowness,
> gambling on the possibility that the AI wouldn't just zip off to
> superintelligence and Friendliness. With six billion lives on the line, a
> little delay may be justified, but it has to be the absolute minimum
> delay. Unless major problems turn up, a one-week delay would be entering
> Hitler/Stalin territory.
>

No. The delay has to be long enough to make us as certain as possible
that the AI will not eat the 6 billion people for lunch. With that as
even a remote possibility, there is no way it is sane to speak of being
a Hitler for delaying one week. Please recalibrate on this.

> > > If the AI was Friendliness-savvy enough during the prehuman training
> > > phase, we might want to eliminate the gradual phase entirely, thus
> > > removing what I frankly regard as a dangerous added step.
> >
> > How does it become dependably Friendliness-savvy without the feedback?
> > Or do I misunderstand what gradual phase you want to eliminate?
>
> I think so - the scenario I was postulating was that the AI became
> Friendliness-savvy during the pre-hard-takeoff phase, so that you're
> already pretty confident by the time the AI reaches the hard-takeoff
> level. This doesn't require perfection, it just requires that the AI
> display the minimal "seed Friendliness" needed to not take any precipitate
> actions until ve can fill in the blanks by examining a nondestructive
> brain scan.
>

Exactly what brain will we trust to be scanned into the SI? Wouldn't it
also pick up a lot of human traits that might not be so wonderful in
such a super-intelligence? Remember the old Star Trek episode with the
super-computer imprinted from its inventor's mind?

- samantha


