Re: FAI (aka 'Reality Hacking') - A list of all my proposed guesses (aka 'hacks')

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Mon Jan 31 2005 - 20:58:42 MST


 --- Russell Wallace <russell.wallace@gmail.com>
wrote:
 
> Most of the other items on the list have been
> covered before, but I'll
> address this one:
>
> > (2) The specific class of functions that goes
> > FOOM is to be found somewhere in a class of
> > recursive functions designed to investigate
> > special maths numbers known as 'Omega numbers'
> > (Discoverer of Omega numbers was mathematician
> > Greg Chaitin)
>
> This is a substantive hypothesis. Here's why I
> disagree with it.
>
> Let I(P) = the best way to solve problem P given
> infinite computing power.
>
> Let L(P) = the best way to solve problem P given
> limited computing
> power; for the sake of definiteness, say a nanotech
> supercomputer,
> which is the most we can plausibly hope to get our
> hands on in the
> foreseeable future.
>
> Consider chess as an example.
>
> We know what I(chess) is: the minimax function.
>
> What about L(chess)? We have good candidates in the
> form of a
> collection of very strong chess programs. What do
> they look like?
> Essentially tweaks (alpha-beta, iterative deepening,
> NegaScout etc) to
> the minimax function.
>
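
[To make the 'tweaks to minimax' point concrete, here
is a rough Python sketch of plain minimax with
alpha-beta pruning, run over a made-up toy game tree.
The tree, the leaf values and the names are
illustrative only, nothing like a real chess engine:

def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    # Depth-limited minimax with alpha-beta pruning.
    # children(node) -> list of successor positions (empty at a leaf)
    # value(node)    -> static evaluation of the position
    succ = children(node)
    if depth == 0 or not succ:
        return value(node)
    if maximizing:
        best = float('-inf')
        for child in succ:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:   # minimizer avoids this line: prune the rest
                break
        return best
    else:
        best = float('inf')
        for child in succ:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, value))
            beta = min(beta, best)
            if alpha >= beta:   # maximizer avoids this line: prune the rest
                break
        return best

# Toy game tree: positions are strings, leaves have made-up evaluations.
TREE = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
LEAF = {'a1': 3, 'a2': 5, 'b1': -2, 'b2': 9}

best_value = alphabeta('root', 4, float('-inf'), float('inf'), True,
                       lambda n: TREE.get(n, []),
                       lambda n: LEAF.get(n, 0))
print(best_value)   # 3: line 'a' is best for the maximizer; 'b2' is pruned

Iterative deepening and NegaScout are further
refinements of the same basic alpha-beta search.]
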
> Maybe there's some very clever algorithm that could
> beat Deep Blue
> while not relying much on minimax, but there's no
> evidence for such a
> thing thus far, and my guess for what it's worth is
> that there isn't
> any such.
>
> So I'll conjecture that L(chess) ~= I(chess).
>
> What about Go? I(Go) = I(chess) = the minimax
> function.
>
> L(Go) is a lot trickier. Go has a lot more possible
> moves at each
> point than chess, and position evaluation is much
> less well
> approximated by a simple count of material, and in
> practice while
> programs of reasonable strength make some use of
> minimax, they don't
> rely heavily on it. Again, maybe there's some trick
> to tweaking
> minimax for this job that we just haven't stumbled
> on, but it doesn't
> look that way.
>
> So I'll conjecture that L(Go) != I(Go). In other
> words, as we move to
> a more subtle and complex game, L(P) is diverging
> from I(P).
>
> What about real life?
>
> We have candidates for (or at least plausible steps
> in the direction
> of) I(real life); AIXI et al. And we note that some
> formulations of
> these do, as Marc conjectures, relate to Chaitin's
> omega. But as I
> remarked in a previous discussion a little while
> ago, there are good
> reasons AIXI is PDFware rather than running code.
>
> Of course, we don't have candidates for L(real life)
> - finding one is
> precisely the ultimate goal of AI research! The best
> we do have thus
> far is the human mind - which looks nothing at all
> like AIXI and has
> nothing to do with omega.
>
> Again, maybe there's some trick to making an
> AIXI-like algorithm
> computationally tractable, which would make L(real
> life) ~= I(real
> life). But the trend thus far suggests otherwise,
> and therefore I'll
> conjecture this is not the case, and that L(real
> life) has no
> significant connection to AIXI, omega etc.
>
> - Russell
>

OK, you've somewhat misinterpreted what I was
actually hypothesizing.

I agree that in terms of the low-level structure of
the *algorithms themselves* there is likely a huge
difference between I(P) and L(P) in real life.

But in terms of the *utility* of the algorithms (the
effectiveness with which they achieve the desired end
goals), I think that for any problem there is an
algorithm using finite computing power that can
approximate the results of the ideal one (one that
used infinite computing power) to any desired degree
of accuracy.

Remember, in my hypothesis I clearly distinguished
between Omega numbers themselves and functions
designed to approximate them.
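
To be concrete about what 'approximating Omega'
means: Omega can be approached from below by
enumerating programs, running each for a bounded
number of steps, and adding 2^-length(p) for every
program seen to halt. Here is a toy Python sketch of
that scheme; the 'machine' is a stand-in I have made
up purely so the code runs, not a real universal
prefix-free machine:

from itertools import product

def toy_halts(program, max_steps):
    # Toy stand-in semantics (NOT a real universal machine): exactly the
    # strings of the form '1'*k + '0' halt, and they take k steps to do so.
    # This halting set is prefix-free, which the 2**-len(p) sum requires.
    k = program.find('0')
    if k != len(program) - 1:
        return False
    return k <= max_steps

def omega_lower_bound(max_len, max_steps):
    # Enumerate every program up to max_len bits, run each for at most
    # max_steps steps, and add 2**-len(p) for each one observed to halt.
    # As both budgets grow, this climbs monotonically toward the (toy) Omega.
    # The same scheme on a real universal prefix machine converges to
    # Chaitin's Omega from below, but with no computable error bound.
    total = 0.0
    for length in range(1, max_len + 1):
        for bits in product('01', repeat=length):
            program = ''.join(bits)
            if toy_halts(program, max_steps):
                total += 2.0 ** (-length)
    return total

for budget in (2, 4, 6, 8):
    print(budget, omega_lower_bound(budget, budget))

The point is just that the approximating function is
a perfectly ordinary finite computation, even though
the number it is creeping up on is not itself
computable (and you can never know how close any
particular approximation has got).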

Now, about Omega numbers and the Omega Point.
Although the word 'Omega' appearing in both items on
my list of hypotheses was just a coincidence, I do
wonder whether there may actually be some deep
connection between the two (which would make the
shared name a very nice coincidence!)

Now in my quest for Universal morality I've been
looking for a three-way tie-in between the physical
world, the mental world and the mathematical world.

This could be the big connection:

Maths approximation to Omega numbers >>>
Physics approximation to the Omega Point >>>
Recursively self-improving Friendliness function

What am I hypothesizing there? I shall try to
explain. You are aware, of course, that there appear
to be three different modes of description with which
we can describe reality: we can give a physical
description of something, we can give a mathematical
description of something, and we can give an
informational (perceptual) description of something.
Yet all three descriptions are equivalent. For
instance, we could describe the state of your mind in
terms of physics (by using a physical device to
examine your brain state), in terms of mathematics
(by giving a description of the algorithm
representing your mind's software) or in terms of
information processing (a cognitive-science
description of what your mind is doing in terms of
things like concepts, reasoning etc). Yet all three
modes of description would in some sense be
equivalent.

So my big idea is that the following three kinds of
description are also equivalent to each other:

*Mathematics: A function approximating Omega numbers

*Physics: Certain physical objects (i.e. sentients)
taking actions which help move the universe closer
towards the Omega Point

*Mental: Friendliness. The attribute of being
ethical.

I'm hypothesizing that all three of these *could
actually be one and the same thing*.

 



