From: Thomas McCabe (firstname.lastname@example.org)
Date: Fri Apr 11 2008 - 19:45:02 MDT
On Thu, Apr 10, 2008 at 10:23 AM, Tim Freeman <email@example.com> wrote:
> From: "Nick Tarleton" <firstname.lastname@example.org>
> >Utility is not just how good something feels, it's how good I
> >rationally judge something to be; it seems like I currently rationally
> >judge 2*N deaths (say) to be twice as bad as N deaths for all N, and I
> >would choose to modify myself to actually *feel* that difference and
> >eliminate scope insensitivity...
> That sort of altruism is exploitable even without considering absurdly
> improbable hells. All I need to do to exploit you is breed or
> construct or train a bunch of humans who want exactly what I want.
> It's even better if they'll commit suicide, or perhaps kill each
> other, if they don't get it. Then I provide evidence of this to you,
> and you'll want what I want.
> You need to fix the set of people you care about, rather than allow it
> to be manipulated by an adversary. You can't afford to give others
> the power to produce entities that you care about.
> Tim Freeman http://www.fungible.com email@example.com
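The exploit Tim describes can be put as a toy calculation (my own illustration, not anything from the thread): an agent whose utility sums the preferences of whoever currently exists can be steered by an adversary who manufactures like-minded agents, while an agent that freezes the set of people it cares about cannot.

```python
# Toy sketch (illustrative assumption, not from the thread): preferences are
# dicts mapping outcomes to utilities, and the altruist picks the outcome
# that maximizes total utility over some set of people.

def aggregate_utility(outcome, people):
    """Total utility of `outcome`, summed over each person's preferences."""
    return sum(p[outcome] for p in people)

def best_outcome(outcomes, people):
    """The outcome the altruist chooses, given whose preferences count."""
    return max(outcomes, key=lambda o: aggregate_utility(o, people))

original = [{"A": 1, "B": 0}, {"A": 1, "B": 0}]   # everyone prefers A
adversary_spawn = [{"A": 0, "B": 1}] * 5          # bred/trained to prefer B
outcomes = ["A", "B"]

# Exploitable altruist: counts whoever exists now, so breeding five
# B-preferrers flips the decision.
print(best_outcome(outcomes, original + adversary_spawn))  # -> B

# Fixed-set altruist: counts only the original population, so the
# adversary's manufactured agents have no leverage.
print(best_outcome(outcomes, original))                    # -> A
```

The only difference between the two calls is which set of people is passed in, which is exactly the "fix the set of people you care about" point above.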
What would *you* do, if you were faced with this scenario? What would
be the best course of action?
As an aside, this hack has already been exploited, albeit on a smaller
scale. If you have lots of kids and you don't have a lot of money, the
government will give you money to help support the kids. Hence, some
people deliberately choose to have lots of kids in order to collect the money.
--
- Tom
http://www.acceleratingfuture.com/tom