RE: Hard takeoff [WAS Re: JOIN: Joshua Fox]

From: Olie L (neomorphy@hotmail.com)
Date: Wed Feb 08 2006 - 17:51:20 MST


>From: "H C" <lphege@hotmail.com>
>Subject: RE: Hard takeoff [WAS Re: JOIN: Joshua Fox]
>Date: Wed, 08 Feb 2006 20:35:54 +0000
>
>>but you can't use that fact to predict that it will escape to create a
>>hard take-off.
>>
>
>It sounds like you think hard take-off is bad or undesirable. The only
>situation where hard is less desirable than soft is when you are doing a
>crappy job of ensuring Friendliness. In which case you are probably screwed
>anyway.

It is my gut feeling (with a few supporting ideas) that a Friendly AI would
probably not create the hardest takeoff it could.

I 'suppose' that ve would most likely soften things a _lot_ for the sake of
other extant sentients.

I will comment further on this anon, but I think the principle of
"think before you act" gives a good hint as to why.

>>Also,
>>
>>Computational resources are not the only limiting factor.
>>
>>Factors that influence how hard the takeoff "knee" is include:
>>
>>1) Computational resources
>
>really!?

I said "factors include", not "other factors". You were just searching for
something to criticise, and flailed wildly.

>>2) Other resources - particularly nanotech.
>> - it doesn't have to be replicators. Tunnelling electron
>>microscope-level nanotools etc will make it much easier for a "runaway AI"
>>to create replicators
>
>Why would nanotech be a necessary resource for hard take off, other than
>possibly for computational resources? It wouldn't be.

Because it _determines_ the availability of computational resources.

Nanotech is not necessary for awakening. Existing nanotech is also not
necessary for expanding computational resources, but it will have a huge
impact on how things turn out a few "steps" of change beyond awakening.

Scenario 1: "Runaway AI 1" at 9 AM is awake and has access to all computing
power on the Interweb. It uses a nanolab to develop nano-assembly techniques.
At 10 AM, "Runaway AI 1" has figured nano-assembly out and sets about creating
nano-assemblers. At 10:30 AM, the nano-assemblers are busily building a big
nano-computer. At 10:45 AM, "Runaway AI 1" is not only increasing its code
efficiency but is also rapidly expanding its computational resources.

Scenario 2: "Runaway AI 2" at 9 AM is awake and has access to all computing
power on the Interweb. Nano-assemblers already exist. At 9 AM, "Runaway AI 2"
sets about using them to create a big nano-computer. At 9:15 AM, "Runaway AI
2" is already rapidly expanding its computational resources.

Two different hard takeoff scenarios. Same available computing resources,
same initial code efficiency. Different takeoff knees. Scenario 2 has a
much harder knee.

>>3) "first instance efficiency" - I know there's a better term, but I can't
>>remember it. If the first code only just gets over the line, and is slow
>>and clunky --> slower takeoff
>
>ie. need more computational resources.

Yes, the one can offset the other. My point is that there is more than one
factor.

That is:

(takeoff hardness) is not only a function of (computational resources).

All other things being equal, there is a functional relationship, but the
function might look more like:

(takeoff hardness) = derivative of {(initial efficiency) x (goals) +
(additional resources)}^(computational resources)

That formula is probably very wrong - my math sucketh - but hopefully you
get my drift.

>>4) AI goals (how much it wants to improve)
>
>The only conceivable case in which an AI's goals would limit its
>self-improvement would be some programmer enforced boxing, which is a bad
>idea in the first place.

How it goes about self-improvement is a limiting factor.

Converting all nearby matter to computronium-of-the-moment is the most rapid
way to self-improve in the short term.

Sitting back (self-improving without assimilating more resources) gives an
expansive AI time to think about which resources it needs to assimilate now,
which resources should be left until later, and which "resources" have merit
in being left untouched.

>Self-improvement is good for any goal in general.

Yes, but acquiring resources for improvement, although fast, is not
necessarily the best approach.

>In summary, if you have an intelligent system, hard take-off is both
>desirable and probable.

I dispute that. Firstly, you haven't said "Friendly". Secondly, "hard
take-off" encapsulates a number of scenarios that, even if the AI is friendly
to sentients, are otherwise undesirable.

>where the necessary and sufficient factor is computational resources.

See above.

>Furthermore, the amount of computing power necessary for hard take-off is
>unknowable except with direct reference to the specifications of the actual
>intelligent system.

I concur.

-- Olie


