From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Fri Apr 11 2008 - 19:03:09 MDT
Tim Freeman wrote:
> From: "Nick Tarleton" <email@example.com>
>> Utility is not just how good something feels, it's how good I
>> rationally judge something to be; it seems like I currently rationally
>> judge 2*N deaths (say) to be twice as bad as N deaths for all N, and I
>> would choose to modify myself to actually *feel* that difference and
>> eliminate scope insensitivity...
> That sort of altruism is exploitable even without considering absurdly
> improbable hells. All I need to do to exploit you is breed or
> construct or train a bunch of humans who want exactly what I want.
> It's even better if they'll commit suicide, or perhaps kill each
> other, if they don't get it. Then I provide evidence of this to you,
> and you'll want what I want.
> You need to fix the set of people you care about, rather than allow it
> to be manipulated by an adversary. You can't afford to give others
> the power to produce entities that you care about.
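Tim's exploit can be made concrete with a toy model (a minimal sketch; all numbers, names, and the preference-summing rule are illustrative assumptions, not from the thread): an agent whose utility is a straight sum of preference satisfaction over *whoever currently exists* will switch its preferred outcome as soon as an adversary mints enough new preference-holders who all want the adversary's outcome.

```python
# Toy model of the exploit Tim describes: an altruist whose utility
# sums preference-satisfaction over everyone who currently exists can
# be steered by anyone able to create new preference-holders.
# All names and numbers here are illustrative assumptions.

def best_outcome(population, outcomes):
    """Pick the outcome maximizing total preference satisfaction."""
    return max(outcomes, key=lambda o: sum(prefs[o] for prefs in population))

outcomes = ["A", "B"]

# Original population: three people who prefer outcome A.
population = [{"A": 1.0, "B": 0.0} for _ in range(3)]
assert best_outcome(population, outcomes) == "A"

# The adversary breeds/constructs ten agents who strongly want B
# (and suffer if they don't get it).
population += [{"A": -1.0, "B": 1.0} for _ in range(10)]
assert best_outcome(population, outcomes) == "B"
# The altruist's choice now tracks the adversary's wishes.
```

Under this toy aggregation rule, the fix Tim proposes amounts to evaluating `best_outcome` over a frozen reference population rather than over whoever exists at decision time.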
I bet if you tried hard enough, you could think of a better
decision-theoretic solution to the above problem than "fixing the set
of people you care about" - which it's already too late for me to do.
The Utility Function Is Not Up For Grabs
People sure are in a rush to hack the utility function in all sorts of
ways... probably because they don't understand what this little mathy
object *means*: it's the set of things you really, actually care about.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence