Wanting vs Happiness (was Re: AI Boxing: http://www.sl4.org/archive/0207/4977.html)

From: Tim Freeman (tim@fungible.com)
Date: Wed Jun 11 2008 - 07:35:42 MDT


From: "Peter C. McCluskey" <pcm@rahul.net>
> But it isn't clear that "wants" refers to a logically consistent concept,
>much less a concept that is simple in all circumstances. For example, it's
>possible to create conditions under which asking a person about his
>happiness during an experience reveals preferences which differ from
>the preferences revealed by asking how he remembers it afterward (see
>the book Stumbling on Happiness for more on this subject).

The example isn't relevant. I'm talking about what people want. I'm
not talking about what makes them happy, what they say makes them
happy, or anything having to do with their memory. I agree that all
of those other things are not simple. Emotions aren't part of
defining what people want because I don't need to make statements
about someone's emotions to say that from their actions it appears
that they are trying to get the world into one state rather than
another. This stance makes it possible to ascribe purpose to things
that don't have emotions, such as plants and chess-playing robots.
"The vine is growing that way to get more sunlight."

To figure out what someone wants, you first have to estimate:

* what voluntary actions they are taking and have taken, and
* what they believe (that is, their estimated state of the universe), and
* how cause-and-effect works.

There is uncertainty in all of these things: we may not know what they
did, the person may not have a firm opinion about what is true at any
moment, and the best-guess laws of physics may be either not known
with certainty or inherently nondeterministic.
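
To make that concrete, here is a minimal Python sketch of the three
inputs. All of the names are invented for illustration, and the
uncertainty shows up as probability distributions rather than point
estimates:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    State = str    # hypothetical: world-states named by strings
    Action = str   # hypothetical: actions named by strings

    @dataclass
    class Observation:
        # Voluntary actions we believe they took (uncertain in reality;
        # a point estimate here to keep the sketch short).
        actions: List[Action]
        # What they believe: a probability for each possible world-state.
        beliefs: Dict[State, float]
        # Cause-and-effect: given a state and an action, a distribution
        # over resulting states (nondeterminism lives here).
        dynamics: Callable[[State, Action], Dict[State, float]]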

We represent what they want as a utility function mapping a
world-state to a utility. "What they want" is any utility function
that explains their actions, in the sense that their actions, taken
in the world they believe themselves to be in, yield an optimal value
of that utility function.
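
Continuing the sketch above, "explains their actions" could be cashed
out as: every action they actually took does at least as well, by
their own beliefs and causal model, as any alternative. This is one
illustrative formalization, not the exact definition on my website:

    def expected_utility(u, beliefs, dynamics, action):
        # Average utility of the outcome of `action`, weighted by what
        # the person believes the world is and how they think it responds.
        return sum(p_s * p_o * u(outcome)
                   for state, p_s in beliefs.items()
                   for outcome, p_o in dynamics(state, action).items())

    def explains(u, obs, all_actions):
        # u explains the behavior if each action actually taken is
        # optimal under u, given the beliefs and causal model in obs.
        best = max(expected_utility(u, obs.beliefs, obs.dynamics, b)
                   for b in all_actions)
        return all(expected_utility(u, obs.beliefs, obs.dynamics, a) >= best
                   for a in obs.actions)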

The judgement of what they want has uncertainty as well, both because
the output of the algorithm inherits uncertainty from the inputs, and
because even if the inputs are known with certainty, there will in
general be multiple utility functions that explain the behavior.
Prefer the simpler utility functions in the usual way, based on a
simplicity metric. If you want a decision procedure, use the speed
prior; if uncomputability is acceptable, use the universal prior
instead.
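
As a toy version of that preference, continuing the sketch: weight
each candidate utility function by 2**-(its description length in
bits), which is the universal-prior flavor; also dividing by running
time gives a crude stand-in for the speed prior (not Schmidhuber's
exact construction):

    def weight(description_bits, runtime_steps=None):
        # Universal-prior-style weight: shorter programs count for more.
        w = 2.0 ** -description_bits
        if runtime_steps is not None:
            # Crude speed-prior-style penalty on computation time.
            w /= runtime_steps
        return w

    def best_explanation(candidates, obs, all_actions):
        # candidates: list of (utility_function, description_bits) pairs.
        # Among those that explain the behavior, take the simplest.
        viable = [(u, bits) for u, bits in candidates
                  if explains(u, obs, all_actions)]
        return max(viable, key=lambda ub: weight(ub[1]), default=None)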

Details are on my website. The a priori likelihood stuff is from
Schmidhuber, Hutter, and Kolmogorov before them.

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com

