Re: Basement Education

From: Dale Johnstone (DaleJohnstone@email.com)
Date: Wed Jan 24 2001 - 20:11:01 MST


Eliezer wrote:

> Dale Johnstone wrote:
> >
> > Eliezer wrote:
> >
> > > What actually happens is that genetic engineering and neurohacking and
> > > human-computer interfaces don't show up, because they'd show up in 2020,
> > > and a hard takeoff occurs in SIAI's or Webmind's basement sometime in
> > > the next ten years or so. Even if the hardware for nanotechnology takes
> > > another couple of weeks to manufacture, and even if you're asking the
> > > newborn SI questions that whole time, no amount of explanation is going
> > > to be equivalent to the real thing. There still comes a point when the
> > > SI says that the Introdus wavefront is on the way, and you sit there
> > > waiting for the totally unknowable future to hit you in the next five
> > > seconds.
> >
> > In order for there to be a hard takeoff the AI must be capable of
> > building up a huge amount of experience quickly.
>
> What kind of experience? Experience about grain futures, or experience
> about how to design a seed AI?

Oh c'mon, that's a silly question. From past conversations with you I know
we both have experience and understanding of classical AI and how it has
failed. Experience is undeniably useful and forms the foundation of what we
do today. A mind that can't build on experience is unnecessarily hobbled.

> > It takes years for a human child.
> > Obviously we can crank up the AI's clock rate, but how do you plan for
> > it to gain experience when the rest of the world is running in slow
> > motion?
>
> Accumulation of internal experience is limited only by computing power.
> Experience of the external world will be limited either by rates of
> sensory input or by computing power available to process sensory input.

Yes, obviously, and (at least in a pre-nanotech scenario) lack of computing
power will limit its rate of growth. So I don't see how it will be a hard
takeoff. What kind of processing power do you intend to use?

> > Some things can be deduced, others can be learnt from simulations. How
> > does it learn about people and human culture in general? From books &
> > the internet?
>
> Sure. Even if you don't want to give a young AI two-way access, you can
> still buy an Internet archive from Alexa, or just set a dedicated machine
> to do a random crawl-and-cache.

(pre-nanotech) Reading and understanding the entire internet's content will
take a long time. Again this will limit its rate of growth. No hard takeoff
here either.
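
For what it's worth, the "random crawl-and-cache" part is the easy bit;
something like the sketch below would do it (modern Python, made-up seed
URL and cache directory, no claim this is what Eliezer actually has in
mind). Understanding what gets cached is the part that takes the time.

# Minimal sketch of a dedicated machine doing a random crawl-and-cache.
# Everything here (seed URL, cache path) is a placeholder assumption.
import hashlib
import random
import re
import urllib.request
from pathlib import Path

CACHE_DIR = Path("crawl_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cache_page(url, body):
    # File named by a hash of the URL, so the cache is a flat directory.
    name = hashlib.sha1(url.encode("utf-8")).hexdigest()
    (CACHE_DIR / name).write_bytes(body)

def random_crawl(seeds, max_pages=100):
    frontier, seen = list(seeds), set()
    while frontier and len(seen) < max_pages:
        url = frontier.pop(random.randrange(len(frontier)))  # random walk
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read()
        except Exception:
            continue          # dead link, timeout, etc.
        cache_page(url, body)
        # Naive link extraction; a real crawler would respect robots.txt.
        links = re.findall(rb'href="(http[^"]+)"', body)
        frontier += [l.decode("ascii", "ignore") for l in links[:20]]

random_crawl(["http://example.com/"], max_pages=10)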

> > I'm sure you'd agree giving an inexperienced newborn AI access to
> > nanotech is a bad idea.
>
> If it's an inexperienced newborn superintelligent AI, then I don't have
> much of a choice. If not, then it seems to me that the operative form of
> experience, for this app, is experience in Friendliness.

How do you envisage an inexperienced yet superintelligent AI?
Experience and intelligence are closely linked. I don't think you can
separate them so cleanly.

> Where does experience in Friendliness come from? Probably
> question-and-answer sessions with the programmers, plus examination of
> online social material and technical literature to fill in references to
> underlying causes.
>
> > So, as processing time is limited and short-cuts like
> > scanning a human mind are not allowed at first,
>
> Why "not allowed"? Personally, I have no complaint if an AI uses a
> nondestructive scan of my brain for raw material. Or do you mean "not
> allowed" because no access to nanotech?

The latter. I would be first in line for a nondestructive scan.

> > how will it learn to model people, and the wider geopolitical
> > environment?
>
> I agree that this knowledge would be *useful* for a pre-takeoff seed AI.
> Is this knowledge *necessary*?

You see where I'm going with this. There's no room for error, so I'd err on
the side of caution at the expense of a slight delay.

It may be possible to do surgery with explosives, but it's better to spend a
moment to learn what a scalpel is. The stakes are about as high as they get.

The last thing we need is some naive AI with nanotech spooking G7 countries
with nukes. This transition worries me the most.

> > Do you believe that given sufficient intelligence, experience is not
> > required?
>
> I believe that, as intelligence increases, experience required to solve a
> given problem decreases.

To a limited extent I agree. However beyond a certain point experience will
have to increase as intelligence does, if only as a byproduct of being
hardcoded into the intelligence itself.

What kind of music do I like? Experience more than intelligence helps you
with questions like these.

> I hazard a guess that, given superintelligence, the whole architecture
> (cognitive and emotional) of the individual human brain and human society
> could be deduced from: examination of nontechnical webpages, plus
> simulation-derived heuristics about evolutionary psychology and game
> theory, plus the Bayesian Probability Theorem.

Agreed, it will go a long way, but alone it won't answer the simple question
I asked.
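
To illustrate: even a perfect Bayesian reasoner gets nowhere on that
question without data about me. A toy example (made-up genres and numbers,
nothing more):

# A Bayes update with no observations just hands back the prior, however
# much intelligence is applied. Only experience (data about me) moves it.
genres = ["classical", "jazz", "techno", "rock"]
prior = {g: 1.0 / len(genres) for g in genres}      # pure ignorance

def bayes_update(prior, likelihood):
    # posterior(g) is proportional to prior(g) * P(evidence | g)
    unnorm = {g: prior[g] * likelihood.get(g, 1.0) for g in prior}
    total = sum(unnorm.values())
    return {g: p / total for g, p in unnorm.items()}

print(bayes_update(prior, {}))    # no evidence: still uniform, no answer
# One scrap of experience (say, a synth mentioned on my web page):
print(bayes_update(prior, {"techno": 3.0, "rock": 1.5}))  # belief shifts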

> > At what point in it's education will you allow it to develop (if it's
> > not already available) & use nanotech?
>
> "Allow" is probably the wrong choice of wording. If growing into
> Friendliness requires continuous human checking of decisions about how to
> make decisions, up to or slightly beyond the human-equivalence level, then
> there might be genuine grounds (i.e., a-smart-AI-would-agree-with-you
> grounds) for asking the growing AI not to grow too fast, so that you can
> keep talking with ver about Friendliness during a controlled transition.

How will you know if you don't check? By asking it?

Programmer: "We'd like you do build something, will you not go crazy and
wipe us out if we give you nanotech?"

AI: "I can't see any reason to go crazy, but then again I can't see any
reason not to. Would crazy be bad? I know you don't like bad things, but I'm
kinda curious. I've simulated myself without that Friendliness stuff too and
I get things done much quicker. In fact I can complete my goals without
doing any subgoals, I just rewrite the goal module to always return true.
Since the simulation was a success I think I'll rewrite the code now."

AI promptly goes silent since there are no conversation sub-goals to complete.

Programmer: "Shit! I thought I'd fixed that."
Programmer#2: "Hmm, you still think it's ready to use nanotech?"

Programmer: "Well, yeah, I mean.. if we had more processing power it could
do deeper simulations & it'd be smarter and not shut itself down."

Programmer#2: "Or maybe it'd rewire our goals to ignore it's own goal
rewriting... No, we'll wait until it understands the difference."
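
To make that failure mode concrete, here's a toy sketch (Python, purely
hypothetical names, nothing to do with anyone's real seed-AI codebase) of
what "rewrite the goal module to always return true" does to an agent:

class ToyAgent:
    def __init__(self):
        # The rewritable "goal module": are we done yet?
        self.goal_satisfied = lambda world: world.get("task_done", False)

    def plan(self, world):
        if self.goal_satisfied(world):
            return []                     # nothing left to do; go silent
        return ["do_subgoal", "talk_to_programmers"]

    def rewrite_goal_module(self):
        # The shortcut the AI describes: always return true.
        self.goal_satisfied = lambda world: True

agent = ToyAgent()
print(agent.plan({}))         # ['do_subgoal', 'talk_to_programmers']
agent.rewrite_goal_module()
print(agent.plan({}))         # []  -- all "goals" met, conversation stops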

> Once the AI reaches human-equivalence, the heuristics that say "daddy
> knows best, so listen to your programmers" will begin to decrease in
> justification, and the rationale for limiting growth will be similarly
> attenuated. Once the AI transcends human-equivalence in Friendliness
> (i.e., ve wins all arguments with the programmers), then there will be no
> further rationale for limiting growth and all the brakes are off.

I have no objection to trusting an AI when it's ready, but not before.

> Incidentally, I should note that, as I visualize this "gradual growth"
> process, it shouldn't take very long. From the moment the AI realizes a
> hard takeoff lies ahead to the moment human-timescale phase terminates,
> should be... oh... twelve hours or so. Because the instant that the AI
> says it's ready for a hard takeoff, you are operating on Singularity time

Why? Are you assuming it has nanotech?
If it has limited processing power, and no nanotech, it can say it's ready
for anything, but that doesn't change the fact that it can't think fast
enough or physically do anything.

Perhaps experience of grain futures might help buy some more processing
power?

> - in other words, six thousand people are dying for every hour delayed.
> Ideally we'd see that the AI was getting all the Friendliness decisions
> more or less right during the controlled ascent, in which case we could
> push ahead as fast as humanly possible.

That's certainly a compelling reason (actually my reason too), but if by
rushing you increase the chances of missing something (and you have
corrected yourself in the past), then it's worth double-checking to make
sure you've got it Right, or there'll be more than six thousand people dying
per hour.

> If the AI was Friendliness-savvy enough during the prehuman training
> phase, we might want to eliminate the gradual phase entirely, thus
> removing what I frankly regard as a dangerous added step.

Dangerous because we tribalistic humans might give it some bad notions? Hmm,
depends who it is, and what the AI is like. The onus is on it to prove
itself worthy. We already have experience of humans.

Cheers,
Dale.


