Re: Maximizing vs proving friendliness

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Wed Apr 30 2008 - 18:53:53 MDT


--- Tim Freeman <tim@fungible.com> wrote:

> From: Matt Mahoney <matmahoney@yahoo.com>
> >I realize you can describe a simple guiding principle for
> >Friendliness
> >such as "increase total human utility". But it is up to humans to
> >write the code that describes our utility function.
>
> This assumes that humans can't write fairly compact code that
> estimates the human utility function, given human behavior as input.
> I have a specification of that code, and it's fairly simple. You
> could execute it if you had a Python interpreter running on a
> more-than-astronomically fast computer. See
> http://www.fungible.com/respect/paper.html.
>
> If you wish to maintain your conclusion, you could argue that my spec
> is wrong, or that an implementation of it that really works on
> buildable hardware would be incomprehensibly difficult. I'd really
> like to see a good argument that my spec is wrong, so please go that
> way if you have a choice.

I believe your algorithm is correct. It looks like your program is
based on AIXI^tl: enumerate all utility functions on time-bounded
Turing machines of increasing complexity until a machine is found that
predicts a training set of observed human behavior. Is this right?
Have you estimated its run time?
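To check that I am reading you correctly, here is a toy sketch of the
search loop I have in mind. The repeating-pattern "machine" is just a
stand-in for a real time-bounded universal Turing machine, and the
observations are a made-up bit sequence:

import itertools

def run(program, n_outputs, time_bound):
    # Toy stand-in for a time-bounded universal machine: treat the
    # bit string as a repeating pattern. A real implementation would
    # execute the program on a universal Turing machine, halting it
    # after time_bound steps.
    if n_outputs > time_bound:
        return None   # out of time before producing enough output
    return [program[i % len(program)] for i in range(n_outputs)]

def find_shortest_model(observations):
    # Enumerate candidate machines in order of increasing length,
    # i.e. increasing complexity; per the speed prior, give an
    # n-bit machine up to 2^n steps.
    for n in itertools.count(1):
        for bits in itertools.product((0, 1), repeat=n):
            if run(bits, len(observations), 2**n) == observations:
                return bits   # shortest machine that fits the data

print(find_shortest_model([1, 0, 1, 0]))   # -> (1, 0)

The point is the order of enumeration: simplest machines first, each
one allotted 2^n steps.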

Earlier I estimated that the complexity of the human genome is bounded
by 10^7 bits. If we assume that 10% of it encodes the brain, then the
complexity of the "hardcoded" algorithm (the brain's complexity at
birth) is 10^6 bits. Your program also has to guess the laws of
physics (409 bits*) as well as human behavior, so you need to test on
the order of 2^(10^6 + 409) = 2^1000409 candidate machines.
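The count is easy to check with a back-of-envelope script (the 10^6
and 409 figures are just the estimates above):

import math

brain_bits = 10**6     # complexity of the brain's algorithm at birth
physics_bits = 409     # description of the fundamental laws of physics
total = brain_bits + physics_bits
# Prints: 2^1000409 candidates ~ 10^301153
print(f"2^{total} candidates ~ 10^{total * math.log10(2):.0f}")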

> >You cannot have any help from AI because that would mean the AI is
> >helping to reprogram its own goals.
>
> You're splitting things into two pieces when you don't need to, and
> then arguing that each piece must precede the other so it can't be
> done. The two pieces are the AI writing code and the AI determining
> what its goals are. It is possible to solve both problems at once by
> putting the code generation into the utility function.

If you had enough computing power to run your algorithm, then you would
have solved AI long ago. A faster algorithm (only 2^818 steps*) would
be to enumerate all laws of physics until a universe supporting
intelligent life is found.

*I am assuming:
(1) this is how the universe was actually created;
(2) the optimal time bound of a Turing machine of n bits is 2^n steps,
per Schmidhuber's speed prior;
(3) a universe whose state is described by S bits cannot be simulated
in fewer than S steps;
(4) the fastest algorithm for a given universe is the most likely to
be found first;
(5) in our universe, S = the Bekenstein bound of the Hubble radius =
2.91 x 10^122 bits ~ 2^409 bits.
Thus, simulating 2^409 candidate universes for 2^409 steps each takes
2^409 x 2^409 = 2^818 steps.
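Schematically (simulate_universe and contains_intelligent_life are
placeholders for machinery nobody can build; only the counting
matters):

def simulate_universe(law, steps):
    # Placeholder: advance the universe whose physics is encoded by
    # the 409-bit integer `law` for `steps` steps. By assumption (3),
    # an S-bit universe needs at least S steps to simulate.
    raise NotImplementedError

def contains_intelligent_life(state):
    # Placeholder: some test applied to the simulated state.
    raise NotImplementedError

def enumerate_universes(physics_bits=409):
    S = 2**physics_bits                   # assumption (5): state size
    for law in range(2**physics_bits):    # every candidate physics
        if contains_intelligent_life(simulate_universe(law, steps=S)):
            return law
    # Total work: 2^409 universes x 2^409 steps each = 2^818 steps.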

Also, 409 bits seems to me about the right order of magnitude for the
length of a description of the fundamental laws of physics.

-- Matt Mahoney, matmahoney@yahoo.com


