Re: Maximizing vs proving friendliness

From: Stefan Pernar (stefan.pernar@gmail.com)
Date: Tue Apr 29 2008 - 18:44:19 MDT


On Wed, Apr 30, 2008 at 5:01 AM, Matt Mahoney <matmahoney@yahoo.com> wrote:

> --- Stefan Pernar <stefan.pernar@gmail.com> wrote:
> > I see your point but do not agree that defining friendliness is
> > hopelessly complex. There is a strong analogy to the Mandelbrot
> > set. Its definition is rather simple, but iterating it to the
> > (n+1)th degree at increased resolution is the hard part.
>
> It is not analogous. The human utility function is defined by our
> genome, which, by an analysis in an earlier post, has a complexity
> bounded by the inverse of the error rate of replication, on the order
> of 10^7 bits, or about a million lines of code. The Mandelbrot set
> has a complexity on the order of a few lines of code. It only looks
> complex in the same way that the output of a cryptographic random
> number generator looks complex when you don't know the key.
>

I see you did your homework on overcomingbias.com ;-) Let me make a couple
of points.

Firstly, reducing human complexity to its genetic complexity ignores
cognitive complexity, which is arguably at least 3-4 orders of magnitude
greater (see http://www.jame5.com/?p=26).

Secondly, the human genome/memome does not represent a human's utility
function any more than the rendered Mandelbrot set represents its
formula. What it does represent is one of evolution's trillions of
current best guesses at how to satisfy evolution's utility function.
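
To make the formula-versus-rendering distinction concrete, here is a
minimal, purely illustrative sketch in Python (my choice of notation,
not anything from Matt's post) of the complete Mandelbrot "generator".
It fits in a handful of lines; the apparent complexity only comes from
rendering it at ever higher resolution and iteration depth:

    # Illustrative only: the complete "generator" of the Mandelbrot set.
    def in_mandelbrot(c, max_iter=100):
        z = 0
        for _ in range(max_iter):
            z = z * z + c       # the whole iteration rule: z -> z^2 + c
            if abs(z) > 2:      # once |z| > 2 the point provably escapes
                return False
        return True             # treated as inside after max_iter steps

    # "Rendering" is just applying that rule to a grid of sample points;
    # the resolution and iteration depth, not the rule, carry the cost.
    grid = [[in_mandelbrot(complex(-2 + 3 * x / 79, -1.2 + 2.4 * y / 59))
             for x in range(80)]
            for y in range(60)]

In my analogy the few lines above play the role of the formula, while
the rendered grid plays the role of the genome/memome.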

> I realize you can describe a simple guiding principle for Friendliness
> such as "increase total human utility". But it is up to humans to
> write the code that describes our utility function. You cannot have
> any help from AI because that would mean the AI is helping to reprogram
> its own goals. If it has any goals, it is going to want to keep them
> (or else it would have turned itself into a happy idiot), and those
> goals cannot be Friendly, or else you would have already solved the
> problem.
>

Well, I am not ready to fully argue my point yet, but there is a third
method of creating an AI that would follow neither of those approaches.
For a very preliminary sketch of how this could be done, see
http://rationalmorality.info/wiki/index.php?title=Guido_Borner_Project

Kind regards,

Stefan

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar

