**From:** Ben Goertzel (*ben@goertzel.org*)

**Date:** Sun May 07 2006 - 11:28:33 MDT

**Next message:** Phillip Huggan: "Re: Changing the value system of FAI"
**Previous message:** Jef Allbright: "Re: Changing the value system of FAI"
**In reply to:** Eliezer S. Yudkowsky: "Re: Changing the value system of FAI"
**Next in thread:** Phillip Huggan: "Re: Changing the value system of FAI"
**Reply:** Phillip Huggan: "Re: Changing the value system of FAI"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

Eliezer wrote:

> The problem I haven't been able to solve is *rigorously* describing how
> a classical Bayesian expected utility maximizer would make changes to
> its own source code, including the source code that changes the code;
> that is, there would be some set of code that modified itself.
> Classical decision theory barfs on this due to an infinite recursion.

Hmm.... If you feel like taking the time to give more detail on this, it might be interesting. Maybe someone on the list will present a different angle on the problem that will direct your thinking in a different (and useful) direction... (hey, anything's possible ;-)

Semi-relatedly, it's perhaps worth laying out how a reasonably powerful probabilistic reasoning system (not necessarily a classical Bayesian expected utility maximizer) would confront this problem. Presumably it would:

a -- create an approximative model of itself internally

b -- use this approximative model to carry out a series of hypothetical inference trajectories of the form "If I modified myself into system Y_i, then these are the likely outcomes that would ensue."

c -- use the results of these inference trajectories to figure out how it should modify itself
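The three steps above can be sketched as a toy loop. Everything here is an illustrative assumption I'm introducing for concreteness (the names, and the idea of the self-model as a noisy utility table); it is not a proposal from the original discussion, just a minimal picture of steps a, b, and c.

```python
import random

# Toy sketch of steps a--c; every name and structure here is an
# illustrative assumption, not part of any actual system.

def build_self_model(system, noise=0.1):
    """(a) Build an approximative internal self-model: here, just a
    noisy copy of a table mapping candidate modifications to utility."""
    return {mod: u + random.gauss(0.0, noise) for mod, u in system.items()}

def simulate_outcome(model, modification):
    """(b) One hypothetical inference trajectory: 'If I modified myself
    into system Y_i, this is the likely outcome that would ensue.'"""
    return model[modification]

def choose_modification(system, candidates, noise=0.1):
    """(c) Use the simulated trajectories to pick the candidate
    modification with the best predicted outcome."""
    model = build_self_model(system, noise)
    return max(candidates, key=lambda m: simulate_outcome(model, m))
```

The `noise` parameter stands in for the fact that the self-model is only approximative: the system reasons about a degraded copy of itself, not about itself directly, which is what sidesteps (rather than solves) the infinite-recursion issue.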

Now, b and c are mathematically and conceptually (though not pragmatically) straightforward...

Regarding a: presumably, given fixed space and time resources, the system can search through the space of approximative models and choose the one that seems to give rise to higher-confidence results.... Or it could do scientific experiments, using various methods of approximative-model generation to predict the outcomes of modifying simpler systems...
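The second, "experimental" approach to step a could be pictured like this: score each candidate model-generation method by how well it predicts the outcomes of modifying simpler systems whose true outcomes are already known, then keep the best-scoring method. All names and structures below are invented for illustration under that reading.

```python
# Hypothetical sketch: choosing among approximative-model generators by
# their prediction error on simpler systems with known outcomes.

def score_model_generator(generate_model, test_cases):
    """Total absolute prediction error of one model-generation method
    over (system, modification, true_outcome) test cases."""
    total_error = 0.0
    for system, modification, true_outcome in test_cases:
        model = generate_model(system)
        predicted = model.get(modification, 0.0)
        total_error += abs(predicted - true_outcome)
    return total_error

def pick_best_generator(generators, test_cases):
    """Keep whichever method of approximative-model generation best
    predicted the outcomes of modifying the simpler test systems."""
    return min(generators, key=lambda g: score_model_generator(g, test_cases))
```

This is just empirical model selection: the system treats its model-building methods themselves as hypotheses to be tested on tractable cases before trusting them on itself.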

Presumably, a Bayesian expected utility maximizer would also end up doing something very much like what I've described above ... but even if so, proving this mathematically does not seem obvious, of course....

-- Ben


*This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:00:56 MDT*