Re: Bounded population (was Re: Bounded utility)

From: Tim Freeman (tim@fungible.com)
Date: Sat Apr 12 2008 - 08:24:46 MDT


>I bet if you tried hard enough, you could think of a better
>decision-theoretic solution to the above problem than "fixing the set
>of people you care about" - which it's already too late for me to do,

There's some ambiguity in what I said. In the phrase "fixing the set
of people you care about" I intended "fix" to mean "keep permanently
constant", not "repair".

> The Utility Function Is Not Up For Grabs
>http://www.overcomingbias.com/2008/01/newcombs-proble.html

If I'm correctly getting your point, you're saying that if you have
infinite resolution in your utility function, or an infinite planning
horizon, or an infinite number of potential people you're trying to
include in your altruism, or an infinite maximum utility, then your
values have contradictions in them.

>People sure are in a rush to hack the utility function all sorts of
>ways... probably because they don't understand what this little mathy
>object *means*; it's the set of things you really, actually care about.

I can't find a sensible interpretation of this.

I could try to take you literally here. I'm human. Therefore I don't
really maximize a utility function. Therefore the above statement is
simply false. Gee, that didn't last long.

The most plausible figurative interpretation I can come up with would
literally read as follows. You don't need to read the whole thing; the
important difference is enclosed in _..._'s:

   People sure are in a rush to hack the utility function all sorts of
   ways... probably because they don't understand what this little
   mathy object *means*; it's the set of things _a_rational_actor_we_
   might_construct_someday_ really, actually cares about.

It's entirely self-consistent for a rational actor to bound its
planning horizon, maximum utility, and the resolution of its utility
function. A rational actor that is more-or-less altruistic might even
fix the set of people it's being altruistic for, and regard children
born after it is constructed as a blob of protoplasm that the parents
care about, rather than as a separate entity that deserves care on its
own.

(There are other ways to deal with newborns.)

This is entirely consistent with the rational actor "winning" according
to its own goals.
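
To make that concrete, here's a minimal sketch of such an actor. This
is purely my own illustration (all the names and numbers are made up,
nothing from the thread): a utility function with a fixed set of people
it cares about, a finite planning horizon, a capped maximum utility,
and a quantized resolution.

    # Minimal sketch (hypothetical names throughout): an actor whose
    # utility function is bounded in every respect discussed above.

    class BoundedAltruist:
        def __init__(self, people, horizon, max_utility, resolution):
            # The set of people cared about is fixed ("kept permanently
            # constant") at construction time; later additions are ignored.
            self.people = frozenset(people)
            self.horizon = horizon          # finite planning horizon (steps)
            self.max_utility = max_utility  # finite cap on utility
            self.resolution = resolution    # smallest distinguishable difference

        def utility(self, forecast):
            # `forecast` maps person -> list of per-step welfare numbers.
            # Anyone outside the fixed set contributes nothing directly;
            # a newborn only matters via its parents' welfare.
            total = 0.0
            for person in self.people:
                welfare = forecast.get(person, [])
                total += sum(welfare[:self.horizon])  # truncate at the horizon
            total = min(total, self.max_utility)      # bounded maximum
            # Quantize: differences smaller than `resolution` are not
            # distinguished, so only finitely many utility values occur.
            return round(total / self.resolution) * self.resolution


    if __name__ == "__main__":
        actor = BoundedAltruist(people={"alice", "bob"}, horizon=3,
                                max_utility=100.0, resolution=0.5)
        forecast = {"alice": [1.2, 0.9, 2.0, 50.0],  # step 4 is past the horizon
                    "bob":   [0.4, 0.4, 0.4],
                    "carol": [9.9]}                  # not in the fixed set
        print(actor.utility(forecast))               # finite, bounded, quantized

Nothing in this toy maximizer contradicts itself; it just assigns
bounded, finitely-resolved values and picks whatever scores highest.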

So far as I can tell, it makes perfect sense to hack the utility
function that way, but you're saying it doesn't. Why doesn't it?

I suppose I could try coming up with some more fanciful figurative
interpretation of what you said, but I sense diminishing returns. If
you want me to think you're saying something plausible, you'll have to
try harder.

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com

