Re: Maximizing vs proving friendliness

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Tue Apr 29 2008 - 15:01:34 MDT


--- Stefan Pernar <stefan.pernar@gmail.com> wrote:
> I see your point but do not agree that defining friendliness is
> hopelessly complex. There is a strong analogy to the Mandelbrot set.
> Its definition is rather simple, but iterating it to the n+1 degree
> at increased resolution is the hard part.

It is not analogous. The human utility function is defined by our
genome, which, by an analysis in an earlier post, has a complexity
bounded by the inverse of the error rate of replication, on the order
of 10^7 bits, or about a million lines of code. The Mandelbrot set
has a complexity on the order of a few lines of code. It only looks
complex in the same way that the output of a cryptographic random
number generator looks complex when you don't know the key.
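
To make the contrast concrete, here is a minimal escape-time
membership test for the Mandelbrot set, sketched in Python (the
iteration cap and the function name are my own arbitrary choices; a
true test would iterate forever):

    def in_mandelbrot(c, max_iter=100):
        # c is in the set if z -> z^2 + c stays bounded starting at z = 0.
        z = 0
        for _ in range(max_iter):
            z = z*z + c
            if abs(z) > 2:  # once |z| > 2 the orbit is guaranteed to escape
                return False
        return True         # no escape within max_iter: treat c as a member

    print(in_mandelbrot(complex(-0.5, 0)))  # True
    print(in_mandelbrot(complex(1.0, 0)))   # False

That is the whole definition; everything else you see in a Mandelbrot
image is computation, not description. By contrast, 10^7 bits is
about 1.2 megabytes of description that humans would have to get
right.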

I realize you can state a simple guiding principle for Friendliness,
such as "increase total human utility". But it is up to humans to
write the code that describes our utility function. You cannot have
any help from the AI, because that would mean the AI is helping to
reprogram its own goals. If it has any goals, it is going to want to
keep them (or else it would have turned itself into a happy idiot),
and those goals cannot be Friendly, or else you would have already
solved the problem.

-- Matt Mahoney, matmahoney@yahoo.com
