**From:** Paul Fidika (*fidika@gmail.com*)

**Date:** Thu Oct 21 2004 - 14:38:54 MDT


Well, I'm afraid your problem is a bit beyond me at the moment, but after thinking about it for two days, here are my thoughts.

First of all, it may be helpful to rephrase and clarify Eliezer's problem in more concrete terms. (Disclaimer: the following discussion makes use of Algorithmic Information Theory, a topic which I only barely understand, and as such it may be incorrect.) Suppose we have a Seed AI and a particular problem which we want the Seed AI to solve for us, namely:

(P) Find and output (and only output) the first random bit-string of length n, and then halt,

where by "first" we mean the usual ordering over binary strings, and by "random" we mean random in the sense of incompressibility: a bit-string B of length n is random if the length of the smallest program which prints B is greater than or equal to n, that is, if K(B) >= n. (Intuitively, this means that the smallest program which prints B is just the program "print 0011101…", where 0011101… is B.)

Now our problem P is very simple and may be encoded in a fixed number of bits, so that the complexity of P, that is, K(P), is a fixed constant. Our Seed AI can also be encoded in a fixed number of bits, K(S), and n (the n referred to in P) may be encoded in log(n) bits. Now choose an n such that

n > K(S) + K(P) + log(n).
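
(For concreteness, here is a minimal Python sketch of choosing such an n. The values of K(S) and K(P) are made up; the only point is that log2(n) grows so slowly that the inequality is eventually satisfied.)

```python
import math

# Hypothetical sizes, in bits, for the Seed AI and the problem statement.
# The real constants depend on the encoding; any fixed values illustrate
# the point, since log2(n) grows far more slowly than n.
K_S = 1_000_000  # assumed K(S)
K_P = 10_000     # assumed K(P)
C = K_S + K_P

# Find the smallest n with n > C + log2(n), scanning upward from C + 1.
n = C + 1
while n <= C + math.log2(n):
    n += 1

print(n)  # 1010020: C plus the ~20 bits needed to encode n
```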

Such an n exists since K(S) and K(P) are both constants, while log(n) grows much more slowly than n. Now if our Seed AI were to output a bit-string as an answer to P, then our Seed AI would be provably wrong: by definition, the bit-string asked for by P must be of complexity n or greater, whereas we have a program of complexity less than n which has printed it out, meaning the string our Seed AI has printed out in fact has complexity less than n, and hence cannot be the string requested by P. Thus we have constructed a problem which is IMPOSSIBLE for our Seed AI to solve, NO MATTER WHAT. Our Seed AI may shuffle its bits around all it wants; it will be to no avail.
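
To spell the compression argument out, consider the following sketch, where run_seed_ai is a hypothetical stand-in for the Seed AI; what matters is the length of the wrapper program, not what the stand-in computes:

```python
def run_seed_ai(problem: str, n: int) -> str:
    """Hypothetical stand-in for the Seed AI answering `problem` with
    parameter n. Whatever it returns, the argument below applies."""
    return "0" * n  # placeholder; imagine the real Seed AI's answer here

# This entire file is itself a program that prints the Seed AI's answer.
# Its description length is roughly K(S) + K(P) + log2(n) + O(1) bits,
# which we chose to be LESS than n. So whatever string it prints has
# Kolmogorov complexity below n and cannot be the random string P asks for.
P = "find and output the first random bit-string of length n, then halt"
n = 1_010_020  # any n with n > K(S) + K(P) + log2(n)

print(run_seed_ai(P, n))
```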

Notice that this construction works for ANY computable intelligence; given any computable intelligence, we can ALWAYS construct a problem which that intelligence provably cannot solve. (Yes, Penrose, that INCLUDES humans.) A few properties of P are worth noting:

- There are a finite number of candidate solutions to P (2^n, to be exact).
- A unique solution exists.
- If we weaken P to "print any one (and only one) random binary string of length n", then most of the candidate solutions are in fact solutions to this weakened P, since most strings of length n are random (a quick counting sketch follows this list).
- Whatever bit-string the Seed AI returns will provably be incorrect.
- P can be thought of as a type of Gödel sentence.
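
The counting bound behind the third property is easy to check numerically; a small sketch:

```python
# There are at most 2^0 + 2^1 + ... + 2^(n-c-1) = 2^(n-c) - 1 programs
# shorter than n - c bits, so at most that many strings of length n can be
# compressed by c or more bits: the compressible fraction is below 2^-c.
n = 64
for c in range(1, 9):
    fraction = (2 ** (n - c) - 1) / 2 ** n  # upper bound on compressible share
    print(f"compressible by >= {c} bits: fraction < {fraction:.4f}")
```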

The problem is that our Seed AI is of fixed complexity: it cannot increase its complexity by internally rewriting its program (though it can decrease it), and its current complexity is not sufficient to solve P. What if, however, our Seed AI turns to the environment to gain complexity? Suppose that our environment contains some complicated mechanism which proposes possible changes to the Seed AI's architecture; the Seed AI reviews each proposed architecture and then decides (in a finite amount of time) whether to change to the suggested architecture or stay the way it is. At least one proposable architecture must be powerful enough to solve a given instance of P, so all our Seed AI needs to do is be capable of recognizing such an architecture when it sees one (it needn't be able to generate that architecture internally). Suppose that whenever our Seed AI accepts an architecture change while considering an instance of P, the new architecture can indeed solve P. (Note that the converse needn't hold: our Seed AI may reject architectures which would have solved the instance of P. The following argument also holds even if you insert a finite number of meta-architecture changes, that is, changes to the Seed AI which enable it to better recognize architectures useful for better recognizing architectures useful for solving P, and so on.)
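
As a rough sketch, the review protocol just described might look like this (all names are hypothetical; the only requirement is that the review halts in finite time):

```python
def review(current, proposal, budget_steps: int) -> bool:
    """The Seed AI's verdict on a proposed architecture: True to adopt it,
    False to keep the current one. Must halt within budget_steps steps."""
    # The Seed AI's bounded analysis of the proposal would go here;
    # this placeholder conservatively rejects everything.
    return False

def run_with_environment(initial_architecture, proposals):
    """The environment streams proposed architectures; the Seed AI adopts
    any proposal its review accepts (borrowing complexity from outside)."""
    architecture = initial_architecture
    for proposal in proposals:
        if review(architecture, proposal, budget_steps=10**9):
            architecture = proposal
    return architecture
```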

If our Seed AI did accept some such architecture change, changed to the new architecture, and correctly printed out the string requested by P, then, prima facie, there appears to be no contradiction, because K(S) + K(A), where A is the new architecture, may be much larger than n. However, we can then create a program G which takes our Seed AI and proposes to it EVERY possible architecture change, in some standard order of enumeration; if the Seed AI accepts a new architecture, that architecture is run and presumably prints the string requested by P. But G can itself be specified in K(G) bits, so for all n > K(S) + K(G) + K(P) + log(n), not only can our Seed AI not solve P, it is incapable of recognizing an architecture which CAN solve P when it sees one.
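
A hedged sketch of G, with the accept predicate and the architecture runner left as stand-in parameters:

```python
from itertools import count

def all_architectures():
    """Every bit-string, in the standard length-then-lexicographic order;
    each one is offered to the Seed AI as a candidate architecture."""
    yield ""
    for length in count(1):
        for i in range(2 ** length):
            yield format(i, f"0{length}b")

def G(seed_ai_accepts, run_architecture, P, n):
    """Hypothetical program G: propose every architecture in turn and run
    the first one the Seed AI accepts on P. G itself has some fixed
    description length K(G), so the output is again compressible to about
    K(S) + K(G) + K(P) + log2(n) bits -- the same contradiction as before."""
    for arch in all_architectures():
        if seed_ai_accepts(arch, P, n):  # the finite-time review step
            return run_architecture(arch, P, n)
```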

Also, Marc Geddes's suggestion of weakening our requirement from "the Seed AI outputs the string specified by P" to "the Seed AI outputs a string which is probably the string specified by P" does not help at all. Suppose our Seed AI approximates the Kolmogorov complexity function for all strings of length n (this is possible: though the Kolmogorov complexity function is uncomputable, it is semi-computable from above), and, after a certain amount of time (as determined by the AI), it halts and outputs the string which, according to the approximation so far, is most likely to be the string requested by P. One would think that the Seed AI's answer would become "more likely correct" the longer it spent approximating the Kolmogorov complexity function. But in fact whatever string the Seed AI outputs will always be PROVABLY wrong, for the same reasons given above.
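
"Semi-computable from above" means we can produce a sequence of upper bounds on K that only ever decreases. A genuine scheme would dovetail over all programs; the toy sketch below substitutes off-the-shelf compressors, but it has the right shape: spending more effort can only lower the estimate.

```python
import bz2, lzma, os, zlib

def k_upper_bound(data: bytes) -> int:
    """A crude upper bound on K(data), in bits. Any program that prints
    `data` bounds K from above; here the "programs" are just standard
    compressors plus the trivial print-the-literal-bytes program. Trying
    more of them can only lower the bound, never raise it."""
    candidates = [len(data)]  # the trivial "print these literal bytes"
    for compress in (zlib.compress, bz2.compress, lzma.compress):
        candidates.append(len(compress(data)))
    return 8 * min(candidates)

print(k_upper_bound(b"01" * 4096))      # highly regular data: small bound
print(k_upper_bound(os.urandom(8192)))  # random data: bound stays near 8 * 8192
```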

(Note: if, however, the time at which the Seed AI halts and outputs its answer is determined by some external mechanism of complexity greater than the difference between n and K(S) + K(P) + log(n), it might be possible for the Seed AI to output the correct string. Furthermore, if the Seed AI is initially given a large enough random string as input, then it is possible for the Seed AI to solve P by transforming this string into a string of length n and outputting it, which might be the correct answer. I'm not sure whether, no matter how much random input we give the Seed AI, it can do much better than this (that is, much better than random guessing) for an arbitrary instance of P. Any ideas?)

Eliezer wrote:

> The unique, new problem comes when we ask the theorem prover to rewrite
> itself entirely. Even if we adjoin to Theory-1 the assumption that
> Theory-0 is consistent, if a Theory-1 prover were to write a provably
> consistent (in Theory-1) prover, it would write a Theory-0 prover. This
> prover would then be unable to approve any further rewrites.
>
> We may be able to rescue Schmidhuber's Godel Machine by compartmentalizing
> it into an object system that provably has expected utility for solving
> problems, and a meta-system that can only be rewritten if the rewrite
> provably admits only theorems admissible in the original meta-system. That
> is, we use *two different* criteria for modifying *two different*
> components of the Godel Machine. I don't regard this as a good solution to
> the deep AGI problem [...] It is furthermore unclear as to how a
> rewrite of the meta-system would be adjudged *superior* to the prior
> system, even if it were proven admissible.

Your problem appears to be that the Seed AI can never (internally) increase its complexity, whereas it can decrease it. Also, the Seed AI will not, in general, have sufficient complexity to recognize an improvement when it sees one. This is a problem you fundamentally cannot solve by introducing compartmentalizations or meta-levels into the Seed AI.

I can think of at least two ways to partially solve this:

(1) Scale back your ambition: rather than worrying about problems such as P, concentrate instead solely upon building a Seed AI which can solve, and learn to solve, a broad (but not too broad) class of problems, such as, say, the problems in NP. The problem with P is that there is apparently no way for the person requesting the string to verify that the answer received is in fact correct…
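
What makes NP friendlier is exactly this verifiability: the requester can check any claimed answer in polynomial time. A minimal illustration with SAT (the encoding here is my own, for illustration):

```python
# A formula is a list of clauses; each clause is a list of literals,
# where +v means variable v and -v means its negation.

def verify_sat(clauses: list[list[int]], assignment: dict[int, bool]) -> bool:
    """True iff the assignment satisfies every clause -- a polynomial-time
    check, even though *finding* the assignment may be hard."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))    # True
print(verify_sat(formula, {1: False, 2: False, 3: False}))  # False
```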

(2) Forget some of the more extreme ambitions of Seed AI: a Seed AI cannot always determine solely on its own which changes will be useful, but must rely upon the environment to choose (the Seed AI is borrowing complexity from the environment to make its decision, which is essential for problems such as P); i.e., a "hard takeoff" would be impossible due to the delay imposed by feedback from the environment. Furthermore, the Seed AI may be incapable of doing much better than evolution or random guessing in some cases.

~Paul Fidika
