Re: [sl4] AIs behaving badly (subtitle: There's more to me than utility - why there's society, and possibility too)

From: Petter Wingren-Rasmussen (petterwr@gmail.com)
Date: Mon Dec 08 2008 - 01:26:12 MST


On Sun, Dec 7, 2008 at 11:55 AM, Stuart Armstrong <
dragondreaming@googlemail.com> wrote:

> Summary: our short-term desires are held in check by the possibilities
> in the world, so we overemphasise them. Our job can tell more about us
> than our everyday utility function can. An AI inferring a utility
> function from our behaviour would construct one that overemphasises
> the short term even more, and completely discounts the importance of
> our job (or our "position in society"). It would then make the wrong
> decisions. And, with our short-term desires granted, we may change
> into beings we wouldn't want to become, because the AI will not
> manage the transition skilfully: that is not its role, nor does it
> understand the transition in the way we do.
>
Your conclusion here is the same as the one I've reached through work
with behavioural psychotherapy.
We are so strongly wired for dealing with short-term goals that we
often act counterproductively in the long term.
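
As a concrete illustration, here is a toy Python sketch of hyperbolic
discounting, one standard model of exactly this short-term bias. All
the numbers (the discount parameter k, the reward sizes and delays)
are hypothetical, chosen only to show the classic preference reversal:

# Toy model (all numbers hypothetical): under hyperbolic discounting,
# V = amount / (1 + k * delay), preferences between the same two
# rewards reverse as both move further into the future.

def hyperbolic_value(amount, delay_weeks, k=0.6):
    """Present value of a delayed reward under hyperbolic discounting."""
    return amount / (1 + k * delay_weeks)

# Today: $10 now (10.0) beats $15 in a week (~9.4) -- short-term wins.
print(hyperbolic_value(10, 0), hyperbolic_value(15, 1))

# A year out: $15 in week 53 (~0.46) beats $10 in week 52 (~0.31),
# so the agent plans patiently but acts impulsively when the time comes.
print(hyperbolic_value(10, 52), hyperbolic_value(15, 53))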

A person with OCD might check whether he has turned off the stove
several hundred times a day, because each check calms him down a
little. In the long term, however, spending three hours a day checking
the stove makes him think more about the possibility that he hasn't
turned it off, and raises his anxiety level.
I think the vast majority of people show similar behavioural patterns,
just less obvious ones.
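
With made-up numbers, a small sketch in the same spirit shows why the
checking loop wins in the short term and loses in the long term: each
check buys a little immediate relief but slightly reinforces the
underlying worry.

# Toy model, all parameters hypothetical: relief per check, the rate
# at which checking feeds the baseline worry, and the time horizon.

def total_anxiety(checks_per_day, days=30):
    baseline = 1.0                          # background anxiety level
    total = 0.0
    for _ in range(days):
        daily = baseline * 24.0             # anxiety felt over one day
        daily -= 0.05 * checks_per_day      # immediate relief per check
        baseline += 0.002 * checks_per_day  # each check feeds the worry
        total += max(daily, 0.0)
    return total

print(total_anxiety(0))    # never checking:      720 units of anxiety
print(total_anxiety(200))  # compulsive checking: 4596 -- far worse,
                           # even though the first two days feel better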

Maybe a good solution would be for the AI to rig games for us that
satisfy our short-term goals, where the results can be used to satisfy
the AI's (and all of civilization's) long-term goals - something
similar to what Luis von Ahn is doing with GWAP right now.
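
To make the GWAP analogy concrete, here is a minimal sketch of the
ESP-game mechanism behind it (the function, scoring, and data layout
here are my own simplifications, not von Ahn's actual implementation):
two players label the same image independently, agreement pays out
points immediately, and the agreed label is banked as training data.

# Simplified ESP-game round (hypothetical scoring and storage): the
# short-term reward is points for a match; the long-term value is the
# verified label that accumulates in the database.

def esp_round(image_id, labels_a, labels_b, database):
    """Score one round and harvest any label both players agreed on."""
    matches = sorted(set(labels_a) & set(labels_b))
    if matches:
        database.setdefault(image_id, []).append(matches[0])
        return 10   # points: the players' short-term payoff
    return 0

labels = {}
score = esp_round("img_042", ["dog", "grass", "ball"],
                  ["ball", "park"], labels)
print(score, labels)   # 10 {'img_042': ['ball']}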


