Re: Bounded population (was Re: Bounded utility)

From: Nick Tarleton (nickptar@gmail.com)
Date: Sun Apr 13 2008 - 16:12:55 MDT


On Sun, Apr 13, 2008 at 6:28 AM, Vladimir Nesov <robotact@gmail.com> wrote:
> On Sun, Apr 13, 2008 at 8:57 AM, Nick Tarleton <nickptar@gmail.com> wrote:
> > Well, like he said, it's *what you care about*, which is not arbitrary
> > (or, rather, what you currently care about is contingent, but the fact
> > that you already have values means you can't pick any random
> > extrapolation), and which people have reasoned about for quite a
> > while. Even now, isn't it much more plausible that the more
> > intelligent, more rational, more knowledgeable you would have an
> > additive utility function over people than, I don't know, a sinusoidal
> > one; and that the same would be true of other humans?
> >
>
> I don't have a utility function. I have certain behaviors that can
> be characterized by scope insensitivity and so on.

Yes, but do you *want* to be scope-insensitive? Does it make sense to be?
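
A minimal sketch of the contrast I have in mind (purely illustrative; the
logarithmic form is just a stand-in for scope-insensitive valuation, and
none of these numbers come from anywhere in particular):

import math

def additive_utility(n_people, value_per_person=1.0):
    # Additive: caring scales linearly with the number of people affected.
    return n_people * value_per_person

def scope_insensitive_utility(n_people):
    # Scope-insensitive: the felt response grows only logarithmically,
    # so 200000 barely registers as more than 2000.
    return math.log1p(n_people)

for n in (2000, 20000, 200000):
    print(n, additive_utility(n), round(scope_insensitive_utility(n), 2))

The additive column grows a hundredfold across that range; the
scope-insensitive one grows by less than a factor of two, which is the
pattern the willingness-to-pay studies keep finding in actual humans.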

> How do I reliably
> increase my knowledge about what I "actually" care about?

Start by asking yourself what you currently would want to change about
your motivational system. Read moral philosophy, maybe.

My point is that the goal system of an FAI is not arbitrary - it's
tightly constrained by our current values and the values implicit in
the changes we would make to ourselves, and it can't be freely tinkered
with to resolve paradoxes without serious thought.


