Pascal's Button

From: Nick Tarleton (nickptar@gmail.com)
Date: Mon Apr 07 2008 - 20:04:15 MDT


Something like Pascal's Mugging (
http://www.overcomingbias.com/2007/10/pascals-mugging.html ) is a
problem even without someone trying to mug you. It may be that there
is some way (say, a magic button hidden in the middle of the Boötes
Void, hence the title) to allow 3^^^^3 (or any other very large but
low-algorithmic-complexity number) times as many happy posthumans to
exist as could ever exist in the universe without 'magic' - or, more
generally, for most utility functions there may exist 'magic' to
create much, much more utility than would otherwise be possible. This
is extremely unlikely, but it seems doubtful that it would be unlikely
enough to have lower expected utility than the default course. This
suggests that devoting massive resources to searching for magic is a
convergent subgoal for any utility maximizer whose utility function
admits of Really Big Numbers, and that even a Friendly AI, if it fit
that description, would do so, possibly even to the exclusion of
supporting existing humans, or at least diverting a majority of the
resources that could go to existing humans.
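The worry can be made concrete with a toy expected-utility calculation. The numbers below are made-up stand-ins (3^^^^3 itself is far too large to compute with, so a merely astronomical surrogate is used), but the structure is the point: a low-complexity hypothesis can't get a probability penalty steep enough to cancel its payoff.

```python
from fractions import Fraction

# Illustrative stand-ins, not estimates: the probability and utilities
# are arbitrary, chosen only to show the shape of the argument.
p_magic = Fraction(1, 10**50)   # 'extremely unlikely'
u_magic = Fraction(10**100)     # surrogate for a 3^^^^3-scale payoff
u_default = Fraction(10**15)    # utility achievable without magic

ev_search = p_magic * u_magic   # expected utility of hunting for magic
print(ev_search > u_default)    # the long shot still dominates
```

Exact rational arithmetic (`Fraction`) is used so the comparison isn't an artifact of floating-point overflow.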

So is this really the Friendly thing to do? The resolution of Pascal's
Mugging, on OB, was that "states with many people hurt have a low
correlation with what any random person claims to be able to effect"
(Robin Hanson's words); this doesn't seem to apply because there is no
mugger, the FAI itself is presumably in a different observer class
than (post)humans, and the 'magic' might take the form of creating a
relatively small number of extremely valuable posthumans. Because
there is no mugger, arguments about creating incentives for others to
mug you don't work.

If the Friendly utility function is bounded, that would very likely
solve the problem. This violently disagrees with my ethical intuition,
but I now take it much more seriously than I did before. Ignoring
minuscule probabilities would also solve the problem, but throws
rational consequentialism out the window. Is there some other reason
this isn't the Friendly thing to do, or do I just think it's wrong
because I don't want to die or be restricted because of a bet on odds
too long to comprehend?
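Mechanically, a bound works because once utility saturates, no probability that small can make the long shot outweigh the default course. Continuing the toy numbers from before (the cap is as arbitrary as the rest):

```python
from fractions import Fraction

p_magic = Fraction(1, 10**50)        # same illustrative long shot
u_magic_raw = Fraction(10**100)      # surrogate for a 3^^^^3-scale payoff
u_default = Fraction(10**15)         # utility achievable without magic
bound = Fraction(10**20)             # arbitrary cap on the utility function

u_magic = min(u_magic_raw, bound)    # bounded utility saturates at the cap
print(p_magic * u_magic < u_default) # the long shot no longer dominates
```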



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT