Re: More MWI implications: Altruism and the 'Quantum Insurance Policy'

From: Edmund Schaefer (edmund.schaefer@gmail.com)
Date: Sun Dec 12 2004 - 21:02:42 MST


On Sun, 12 Dec 2004 18:24:01 +1300 (NZDT), Marc Geddes
<marc_geddes@yahoo.co.nz> wrote:

> Thinking about MWI of QM, it occurred to me that a
> true altruist needs to consider the well being of
> sentients in all the alternative QM branches, not just
> this particular branch.

Acting to maximize the probability of an event produces the same
actions as acting to maximize the measure of descendant quantum
realities in which the desired event occurs. I believe this has been
covered before on this list.
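
To make that equivalence concrete, here's a toy sketch in Python. The
route names and weights are made up; "measure" here just means the
Born-rule weight of a branch treated as a fraction of descendant
worlds:

    # Hypothetical decision: each action leads to a set of descendant
    # branches, each carrying a Born-rule weight and an outcome label.
    branches = {
        "route A": [(0.99, "alive"), (0.01, "dead")],
        "route B": [(0.60, "alive"), (0.40, "dead")],
    }

    def probability_alive(action):
        # ordinary single-world probability of the desired outcome
        return sum(w for w, outcome in branches[action] if outcome == "alive")

    def measure_alive(action):
        # total measure of descendant branches with the desired outcome;
        # under MWI this is numerically the same sum
        return sum(w for w, outcome in branches[action] if outcome == "alive")

    # Both objectives pick the same action.
    assert max(branches, key=probability_alive) == max(branches, key=measure_alive)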
 
> For instance
> suppose Eliezer was hit by a truck walking to work.
> Suppose he'd been linking the decision about which
> route to walk to work to a 'quantum coin flip'. Then
> half the alternative versions of himself would have
> taken another route to work and avoided the truck. So
> in 50% of QM branches he'd live on. Compare that to
> the case where Eli's decision about which route to
> walk to work was being made mostly according to
> classical physics. If something bad happened to him
> he'd be dead in say 99% of QM branches. The effect of
> the quantum decision making is to re-distribute risk
> across the multiverse. Therefore the altruist
> strategy has to be to deploy the 'quantum decisions'
> scheme to break the classical physics symmetry across
> the multiverse.

This only works because our fictional Eli assigned a 99% probability
to the lethal path being more desirable. Your "insurance policy" boils
down to the following piece of advice: If you make a decision that
you're really sure about, and happen to be wrong, you're better off
flipping a coin. Sure, that's sound advice, but it doesn't do me any
good. If I knew I was wrong, it wouldn't be very sane of me to keep
that 99% estimate of desirability. You started with a *really* bad
decision, scrapped the decision in favor of a fifty-fifty method, saw
that it drastically improved the survival of your quantum descendants,
and said "behold the life-saving power of randomness". Sorry, but it
doesn't work like that. Intelligence works better than flipping coins.
If you trust coin flips instead of intelligence, you're more likely to
get killed. Translated into MWI terms, that becomes "If you go with
intelligence, you survive in a greater measure of quantum realities."
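
To put rough numbers on the truck example (the 99% and fifty-fifty
figures are Marc's; the calibration parameter q is my own addition):
let q be the measure of branches in which your ordinary, effectively
deterministic decision picks the safe route.

    # q = measure of branches where your "intelligent" (classical, nearly
    # deterministic) decision takes the safe route.
    def survival_measure(q, use_coin):
        return 0.5 if use_coin else q

    for q in (0.99, 0.60, 0.01):
        winner = "coin" if survival_measure(q, True) > survival_measure(q, False) else "intelligence"
        print(f"q={q}: intelligence={q}, coin=0.5 -> {winner} wins")

    # The coin only wins when q < 0.5, i.e. when your decision procedure
    # is worse than chance -- and you can't coherently believe that about
    # a decision while you're making it.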
 
> In fact the scheme can be used to redistribute the
> risk of Unfriendly A.I across the multiverse. There
> is a certain probability that leading A.I researchers
> will screw up and create Unfriendly A.I. Again, if
> the human brain is largely operating off classical
> physics, a dumb decision by an A.I researcher in this
> QM branch is largely correlated with the same dumb
> decision by alternative versions of that researcher in
> all the QM branches divergent from that time on. As
> an example: Let's say Ben Goertzel screwed up and
> created an Unfriendly A.I because of a dumb decision.
> The same thing happens in most of the alternative
> branches if his decisions were caused by classical
> physics! But suppose Ben had been deploying my
> 'quantum insurance scheme', whereby he had been basing
> some of his daily decisions off quantum random
> numbers. Then there would be more variation in the
> alternative versions of Ben across the Multiverse. At
> least some versions of Ben would be less likely to
> make that dumb decision, and there would be an assured
> minimum percentage of QM branches avoiding Unfriendly
> A.I.

And if he doesn't screw up the AI? What if Ben was right? Your
insurance scheme just killed half of that branch of the multiverse
because a lot of Bens decided to rely on coin flips instead of a
correct theory, and I don't see why the second batch of
fifty-gazillion sentients is any less valuable than the first batch.
Also, keep in mind that some branches are going to hit the ultimately
desirable state no matter what.
Somewhere out there there's a quantum reality where Friendly AI
spontaneously materialized out of a gas cloud. You can't really drive
the number of desirable quantum realities down to zero, any more than
you can accurately assign something a Bayesian probability of zero.
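
The same toy arithmetic, applied to the UFAI case (p is my stand-in
for the measure of branches where Ben's considered decision is the
right one; epsilon stands for the vanishingly small measure of
gas-cloud-style worlds where things go well regardless):

    def friendly_measure(p, use_coin, epsilon=1e-30):
        # measure of branches where the deliberate (or coin-flipped) choice
        # comes out right, plus the tiny floor that exists either way
        decided = 0.5 if use_coin else p
        return decided + (1 - decided) * epsilon

    for p in (0.9, 0.5, 0.1):
        print(f"p={p}: no coin={friendly_measure(p, False):.3g}, "
              f"coin={friendly_measure(p, True):.3g}")

    # If Ben was probably right (p > 0.5), the "insurance scheme" strictly
    # shrinks the measure of Friendly-AI branches; the epsilon floor is
    # there with or without the coin.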


