Re: Infinite universe

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Apr 29 2003 - 13:49:28 MDT


Gary F. York wrote:
>
> I'm unpersuaded for reasons I'll get to in a moment. I should confess that I
> most particularly don't _want_ to be persuaded. Seems to me the implication is
> utterly horrible: every evil, monstrous act that could conceivably be
> perpetrated must have happened -- somewhere. For this to be true, for every
> exact doppelganger of me that exists at this time, N, there must be an
> infinite number who 'choose' to do some completely irrational, horrible thing
> at time N + 1.

Yes, the trouble with Tegmark's hypothesis is that it can drive people insane
unless they understand how decision theory carries over to very large or
infinite universes.

You cannot, of course, cause evil, monstrous acts to stop existing.
Everything exists. However, you can try to give evil, monstrous acts a
very low measure: decrease their subjective probability in the futures of
most sentients.
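
A minimal toy sketch in Python of that distinction, assuming a crude
two-branch model with made-up weights: no policy deletes a branch, but a
policy can shift measure, and with it the conditional probability an
observer should anticipate.

    # Toy model: every outcome "exists"; a choice of policy only shifts measure.
    # All branch weights below are invented for illustration.

    def conditional_probability(measures, outcome):
        """Subjective probability of one outcome under normalized branch measures."""
        return measures[outcome] / sum(measures.values())

    # Before any intervention: monstrous futures carry non-trivial measure.
    before = {"fine": 0.90, "monstrous": 0.10}

    # After a successful intervention: the monstrous branch still exists
    # (nonzero measure), but its measure, and hence the subjective probability
    # of ending up in it, has been driven very low.
    after = {"fine": 0.999999, "monstrous": 0.000001}

    for label, measures in (("before", before), ("after", after)):
        p = conditional_probability(measures, "monstrous")
        print(f"{label}: P(monstrous future) = {p:.6f}")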

> The whole point of designing a 'friendly' AI is based on the presumption that
> it's possible to succeed -- which means that the AI, designed properly in the
> first instance, has zero chance of becoming unfriendly. At a lower level, if I
> create a program to print out the integers, there are some ways I could err, but
> it has zero chance of printing the works of Shakespeare instead.

Zero chance? No, not at all, unless you take a tautological definition of
"designed properly", and in a quantum universe even that isn't possible.

> Even if I'm utterly wet in this instance, surely one among you can propose
> something plausible that leads neither to horror nor to unimaginably tedious
> redundancy.

What matters is not whether something "exists" or "does not exist" but its
subjective conditional probability.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

