Re: Why extrapolate? (was Re: [sl4] to-do list for strong, nice AI)

From: Tim Freeman (tim@fungible.com)
Date: Mon Oct 26 2009 - 08:08:10 MDT


From: Matt Mahoney <matmahoney@yahoo.com>
>I know these topics have been discussed, but as far as I know they
>have not been answered in any way that settles the question of "what
>is friendly?"
>
>And this raises the question "what is happiness?" If happiness can be
>modeled by utility, then the AI can compute your utility for any
>mental state. It does a search, finds the state of maximum utility,
>and if your brain has been replaced with a computer, puts you directly
>into this state. This state is fixed. How does it differ from death?
>
>Or if utility is not a good model of happiness, then what is?

You have it backwards -- the AI should take utility into account, but
it should not expect to increase happiness.

People have an equilibrium level of happiness. Except when a person
faces some sort of short-term survival threat, getting more of what
they want makes them happier for a brief time, and then their
expectations increase and their level of happiness returns to where it
was. Thus we shouldn't expect a functioning FAI to increase happiness
much after it stops the starvation, murder, rape, etc.
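
A minimal sketch of that adaptation argument, assuming a toy model in
which reported happiness is just the gap between what a person gets
and what they have come to expect (the function names, the adaptation
rate, and the numbers are made up for illustration, not a claim about
how real hedonic adaptation works):

    # Toy hedonic-adaptation model: happiness tracks the gap between
    # consumption and expectation, and expectation drifts toward
    # consumption over time.
    def simulate(consumption, adapt_rate=0.3, expectation=1.0):
        happiness = []
        for c in consumption:
            happiness.append(c - expectation)              # short-lived boost
            expectation += adapt_rate * (c - expectation)  # expectations catch up
        return happiness

    # A permanent doubling of what people get produces only a transient
    # rise in happiness, which then decays back toward zero.
    print(simulate([1.0] * 3 + [2.0] * 10))

In this toy model a permanent improvement yields only a temporary
bump, which is the pattern the argument relies on.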

Another problem with happiness is that it can be manipulated
pharmacologically or potentially via brain surgery. If the AI tries
to make the world better according to utility functions it infers for
the humans before it takes action, it won't take the shortcut of
simply wireheading everybody to make them happy. (It might still
wirehead the people who want to be wireheaded, but that won't be most
people.)
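
A rough sketch of that decision rule, assuming hypothetical helpers
infer_utility (fit from observations made before the AI acts) and
predict_outcome, which are stand-ins rather than anything proposed on
this list; the key detail is that candidate actions are scored by the
frozen, pre-action utility functions, not by the happiness people
would report afterward:

    # Hypothetical decision loop: score actions with utility functions
    # inferred BEFORE acting, never with post-action reported happiness.
    def choose_action(people, candidate_actions, infer_utility, predict_outcome):
        # Freeze each person's utility function from pre-action observations.
        utilities = {p: infer_utility(p) for p in people}

        def score(action):
            outcome = predict_outcome(action)
            return sum(u(outcome) for u in utilities.values())

        return max(candidate_actions, key=score)

    # "Wirehead everyone" would predict high reported happiness in its
    # outcome, but the frozen utilities of people who don't want to be
    # wireheaded assign that outcome a low score, so it isn't chosen.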

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com

