Re: New website: The Simulation Argument

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Dec 03 2001 - 08:33:42 MST


Jeff Bone wrote:
>
> To be more clear: while Friendly scenarios are one "singleton" outcome that
> supports the simulation stats, strictly speaking simulation (second order) is
> apparently out-of-scope for Eli's def. of Friendly. (I would argue that Friendly
> is in some sense necessarily a de novo or first-order simulation.) Regardless,
> even for loose defs of Friendly, it would seem that the majority of singleton
> outcomes are non-Friendly.

I don't see how this reasoning operates. Nick Bostrom is pointing out an
argument that we are currently living in a computer simulation: that some
post-Singularity civilizations (PSCs) will construct and run a very large
number of ancestor simulations. If there are, say, 1000 total intelligent
civilizations in our Universe so far, then it is only necessary that the
odds of a civilization becoming a simulating PSC be 1/1000 or greater for
the number of simulated civilizations to vastly outnumber the handful
(1000) of real civilizations; even a single simulating PSC, running
millions of ancestor simulations, swamps the real count all by itself. In
this sense, Bostrom's argument is fairly insensitive to the exact
probability of our current civilization ending up as a simulating PSC, as
long as our universe (*or* the real universe) contains a large enough pool
of Singularity-spawning civilizations, and as long as there exists *some*
probability - however slight - of a Singularity permitting the creation of
ancestor simulations.
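
To make the arithmetic concrete, here is a minimal sketch (my own
illustration, not Bostrom's; the particular numbers are purely assumed)
of why the expected count of simulated civilizations swamps the real
ones for almost any nonzero prior:

    # Purely illustrative numbers: 1000 real civilizations, and each
    # simulating PSC assumed to run a million ancestor simulations.
    REAL_CIVS = 1000
    SIMS_PER_PSC = 10**6  # assumption; the argument needs only "very large"

    # p is the prior probability that a civilization becomes a
    # simulating PSC; sweep it across several orders of magnitude.
    for p in (1.0, 0.1, 0.001, 0.0001):
        expected_simulated = REAL_CIVS * p * SIMS_PER_PSC
        ratio = expected_simulated / REAL_CIVS
        print("p = %g: simulated/real ratio = %g" % (p, ratio))

    # Even at p = 1/10000, simulated civilizations outnumber real ones
    # 100 to 1; only a prior of identically zero defeats the conclusion.

The ratio works out to p times SIMS_PER_PSC, so the conclusion depends
far more on how many simulations one PSC runs than on the prior itself.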

I don't see how one can use this argument to reason about the specific
probability of a simulating PSC as the outcome of a particular case. That
probability is one of the Bayesian priors; an unknown Bayesian prior, but
a Bayesian prior nonetheless.

I also don't understand why you say that Friendly AI "is" a simulation,
why you say that Friendly SI is a singleton scenario that supports
simulation (I should think it would exclude it completely), or why you say
that the majority of singleton outcomes are non-Friendly. None of these
are required by Bostrom's argument.

Incidentally, I liked Emlyn O'Regan's reformulation of Bostrom's argument
(from the Extropian list): "The probability that you or your descendants
will ever run an ancestor-simulation is negligible, unless you are now
living in such a simulation."

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


