Re: Pascal's Button

From: Rolf Nelson (rolf.h.d.nelson@gmail.com)
Date: Tue Apr 08 2008 - 07:11:34 MDT


On Mon, Apr 7, 2008 at 10:04 PM, Nick Tarleton <nickptar@gmail.com> wrote:
> So is this really the Friendly thing to do? The resolution of Pascal's
> Mugging, on OB, was that "states with many people hurt have a low
> correlation with what any random person claims to be able to effect"
> (Robin Hanson's words);

This seemed to be Yudkowsky's and Hanson's preferred resolution, if
that's what you mean. Note that such a resolution requires not just
acknowledging that "states with many people hurt have a low correlation
with what any random person claims to be able to effect", but that an
anthropomorphic law exists that makes them correlate to a
theoretically unlimited number of decimal places. In the sense that
the odds of my affecting 3^^^3 other humans are around 1/3^^^3,
rounded to the nearest hundred orders of magnitude, I am
"axiomatically ineffectual" in this framework.

> this doesn't seem to apply because there is no
> mugger, the FAI itself is presumably in a different observer class
> than (post)humans, and the 'magic' might take the form of creating a
> relatively small number of extremely valuable posthumans.

Presumably, in the given resolution, because the FAI is caused by us
axiomatically-ineffectual 21st-century humans, the FAI is axiomatically
ineffectual as well.

> If the Friendly utility function is bounded, that would very likely
> solve the problem. This violently disagrees with my ethical intuition,
> but I now take it much more seriously than I did before. Ignoring
> minuscule probabilities would also solve the problem, but throws
> rational consequentialism out the window. Is there some other reason
> this isn't the Friendly thing to do, or do I just think it's wrong
> because I don't want to die or be restricted because of a bet on odds
> too long to comprehend?

No, I think you understand the issues; the root problem is that our
desires are fundamentally incoherent, and there is no simple solution
that will satisfy all of our moral intuitions. To say in this case
"therefore we must bound our utility function" (or create a new
anthropomorphic rule, or discount tiny probabilities, or sentence
mankind to almost certain death to try to save 3^^^3 strangers) does
not *necessarily* follow, however; we create rules to describe and
accomplish the things we desire in life, and warping our desires to
reduce our Kolmogorov complexity is not always the way to go.
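
(For concreteness, a rough sketch of why a bound would defuse the
problem, on my reading of Nick's suggestion, with U_max standing in for
whatever cap the bounded utility function has: the expected gain from
taking the bet can never exceed the probability times the cap, whereas
an unbounded utility lets the claimed payoff swamp any fixed probability.

\[
\text{bounded: } \mathbb{E}[\Delta U] \;\le\; p \cdot U_{\max}
\;\xrightarrow{\;p \to 0\;}\; 0,
\qquad
\text{unbounded: } \sup_{U} \; p \cdot U \;=\; \infty
\text{ for any fixed } p > 0.
\]

Whether that cap is something we actually endorse is, of course, the
question at issue above.)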

It is definitely a topic that deserves a lot of thought.

-Rolf


