From: Vladimir Nesov (firstname.lastname@example.org)
Date: Sun Apr 13 2008 - 04:28:05 MDT
On Sun, Apr 13, 2008 at 8:57 AM, Nick Tarleton <email@example.com> wrote:
> On Sat, Apr 12, 2008 at 10:00 PM, Vladimir Nesov <firstname.lastname@example.org> wrote:
> > But how do we figure it out? Why must utility be, say, additive by the
> > number of people, as you suggested? Such a rule sounds completely
> > arbitrary. The world-with-humans should be somehow observed and
> > extrapolated, and the principles of this extrapolation are an important
> > problem to figure out, but before that is done it's not clear what the
> > result should be.
> Well, like he said, it's *what you care about*, which is not arbitrary
> (or, rather, what you currently care about is contingent, but the fact
> that you already have values means you can't pick any random
> extrapolation), and which people have reasoned about for quite a
> while. Even now, isn't it much more plausible that the more
> intelligent, more rational, more knowledgeable you would have an
> additive utility function over people rather than, say, a sinusoidal
> one; and that the same would be true of other humans?
I don't have a utility function. I have certain behaviors that can
be characterized by scope insensitivity and so on. If a future me is
going to start running an approximation of a utilitarian algorithm with
a particular utility function, that's a huge modification to what I am;
maybe such a modification should be considered an 'unfriendly'
intrusion, one subtle enough to fool many people into accepting it,
even if it turns out to be a scam. How do I reliably increase my
knowledge about what I "actually" care about?
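[The contrast drawn above, between behavior exhibiting scope insensitivity and an explicit additive utility function, can be made concrete. The sketch below is illustrative only, not from the original exchange; the logarithmic model of scope insensitivity is one common assumption, not a claim about anyone's actual values.]

```python
import math

def additive_utility(lives_saved, value_per_life=1.0):
    # Utilitarian-style rule: value scales linearly with the
    # number of people affected.
    return lives_saved * value_per_life

def scope_insensitive_utility(lives_saved):
    # Rough model of observed human judgment: perceived value
    # grows roughly logarithmically, so 10x the stakes feels
    # far less than 10x as important. (Assumed model.)
    return math.log1p(lives_saved)

# Under the additive rule, saving 2000 lives is worth exactly
# 10x saving 200; under the scope-insensitive model, the ratio
# is much smaller -- which is the gap between current behavior
# and the extrapolated utilitarian algorithm.
```

Replacing the second function with the first in an agent's decision procedure is exactly the kind of "huge modification" at issue: the two assign very different relative values to large-scale outcomes.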
-- Vladimir Nesov email@example.com
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:01:07 MDT