**From:** Johnicholas Hines (*johnicholas.hines@gmail.com*)

**Date:** Mon Feb 16 2009 - 10:52:37 MST

**Next message:** Matt Mahoney: "Re: [sl4] 'Ethical' uploading"
**Previous message:** Johnicholas Hines: "Re: [sl4] foundationalism"
**In reply to:** Stuart Armstrong: "[sl4] another toy model of capability growth"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

On Mon, Feb 16, 2009 at 6:35 AM, Stuart Armstrong

<dragondreaming@googlemail.com> wrote:

> Toy model 1: Lego
>
> Then if S(i) > V(i), self improvement is possible - the AI is smart
> enough to construct smarter machines than itself. If not, it stays
> where it is.

I am not sure that using the words "volume" and "phase space" is
enlightening in this case. Could we not use "difficulty" for "inverse
volume" and "capability" for "how small a volume in the space of
possibilities it can steer the future into"?

To redescribe the Lego model in these words:

There are two parameters to the model, both functions.

1. Difficulty of designing a mind of model number x (d(x)).

2. Capability of a mind of model number x (c(x)).

At a specific model number (y), if difficulty (d(y)) exceeds

capability (c(y)), then Lego predicts zero growth of intelligence. If

capability exceeds difficulty, then Lego predicts growth.
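As a minimal sketch of that rule (my own Python rendering, not part of Stuart's model; the step cap `max_steps` is an assumption I add just to keep the loop finite):

```python
def final_model(c, d, x0, max_steps=1000):
    """Iterate the Lego rule: a mind at model number x advances to
    x + 1 exactly when its capability c(x) exceeds the difficulty
    d(x); otherwise growth stops and x is the final model number."""
    x = x0
    for _ in range(max_steps):
        if c(x) > d(x):
            x += 1
        else:
            break
    return x
```

For example, with constant difficulty 5 and capability c(x) = x, a mind starting at model number 3 stays put, while one starting at 6 grows until it hits the step cap.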

Substituting simple functions into the parameters, we can see some of

the possible behaviors that Lego predicts.

1. Both constant: Either zero growth no matter how much starting

engineering effort is applied, or continuous growth without need for a

starting push.

2. Difficulty constant, Capability linear: There will be a threshold,

zero growth below the threshold, continuous growth afterward.

3. Difficulty linear, Capability constant: This is a bit confusing,
because we have two notions of sophistication: model number and
capability. Possibly we take "capability" to mean "capability of doing
mind design", and model number to mean "general-purpose capability".
In this story, the model number grows for a while and then stops.

4. Difficulty linear, Capability linear: Just like 1, 2, or 3,

depending on whether the lines are parallel, slope of capability

exceeds slope of difficulty, or vice versa.
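A quick check of case 4 (all slopes and intercepts here are illustrative choices of mine): with linear c and d, whichever line has the larger slope eventually dominates, so the dynamics collapse into one of the earlier cases.

```python
def runs_away(c_slope, c_int, d_slope, d_int, x0, steps=1000):
    """True if capability c(x) = c_slope*x + c_int stays above
    difficulty d(x) = d_slope*x + d_int for `steps` iterations
    of the Lego rule, starting at model number x0."""
    x = x0
    for _ in range(steps):
        if c_slope * x + c_int > d_slope * x + d_int:
            x += 1
        else:
            return False
    return True
```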

To get a prediction other than the few that have been mentioned so

far, we would need to have Capability and Difficulty intersect at two

points. Then we could have either:

5. Growth, followed by a region of zero growth, followed by another

region of growth.

6. Zero growth, followed by a region of growth up to some limit,

followed by another region of zero growth.
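A sketch of case 6 under an assumed concave capability curve (the particular functions are my own choice, picked only so that the curves intersect twice): capability exceeds difficulty only in a middle band of model numbers, giving zero growth below the band, growth up to a limit inside it, and zero growth above it.

```python
def final_model(c, d, x0, max_steps=1000):
    """Iterate the Lego rule from model number x0: advance while
    capability exceeds difficulty, then stop."""
    x = x0
    for _ in range(max_steps):
        if c(x) > d(x):
            x += 1
        else:
            break
    return x

# Hypothetical curves: c(x) = 20 - (x - 10)**2 crosses d(x) = 4
# at x = 6 and x = 14, so c > d only on the band 6 < x < 14.
c = lambda x: 20 - (x - 10) ** 2
d = lambda x: 4
```

Starting below the band (say model 3) gives zero growth; starting inside it (say model 7) gives growth up to the limit at model 14; starting above it (say model 20) gives zero growth again.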

I think Lego adds some math to the intuition that "there is some kind
of intelligence threshold." I would expect smooth rising functions
(to the extent that the model approximates reality at all), and I
wouldn't expect the curves to wobble back and forth across each other
in a complicated way, which makes a single threshold seem more likely.

However, I think having two notions of sophistication is a flaw in
the model. Possibly the difficulty function could be the difficulty
of designing a mind with capability x, rather than the difficulty of
designing a mind with model number x? I'm not sure how to formalize
that.

Johnicholas


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT