Re: Singularity Objections

From: Thomas McCabe (pphysics141@gmail.com)
Date: Wed Jan 30 2008 - 14:02:39 MST


On Jan 30, 2008 1:14 AM, Peter de Blanc <peter@spaceandgames.com> wrote:
> On Tue, 2008-01-29 at 23:41 -0500, Thomas McCabe wrote:
>
> > Did you miss the quote from "Technical Explanation"? Essentially, the
> > intuitive notion of how 'plausible' something is doesn't correspond
> > well to an actual probability distribution. Because we have no
> > knowledge whatsoever about the rules governing the simulation (other
> > than the ones we can observe directly), to estimate the probability of
> > a rule, you need to use Solomonoff induction or some approximation to
> > it. If someone did math saying "Hey, a set of rules which leads to our
> > imminent doom has much less complexity than a set of rules which lets
> > us keep going", I'd be willing to revisit the simulation argument. As
> > it is, I seriously doubt this; throwing in an additional conditional
> > (if: we go through the Singularity, then: shut down the simulation)
> > seems likely to add complexity, not remove it. The reverse conditional
> > (if: we don't go through the Singularity, then: shut down the
> > simulation) is simply a negation of the first one, so it seems likely
> > to have similar complexity. "Seems likely" is obviously an imprecise
> > statement; anyone have any numbers?
> >
> > - Tom
>
> Do you apply such strict standards to all reasoning? "Show me the
> Solomonoff Induction or shut up"?

All reasoning which deals with untestable theories and unobservable
entities, yes. This is the same standard of proof I would require for,
e.g., deism. This goes double when we're dealing with the fate of
humanity.

> The complexity of a simulation has implications for how likely it is to
> be run in the first place, but resource requirements are also relevant
> and you're ignoring those.

Solomonoff induction ignores "resource requirements" in the
conventional sense. In Turing-machine land, the only limiting resource
is the number of computational steps, which we don't care about
because we never see them directly. If it took a googol steps to
compute one simulated nanosecond, we'd never know.
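
To make that concrete, here is a toy sketch (the bit counts are made-up
assumptions, and real Solomonoff induction is uncomputable, so this is
only an illustration): hypotheses get weight from description length
alone, and runtime never enters the calculation.

    # Toy length-based prior; the bit counts below are invented for
    # illustration, not estimates of any real rule set.
    def length_prior(bits):
        return 2.0 ** -bits

    hypotheses = {
        "simple physics, simulation keeps running": 1000,
        "simple physics + 'halt at the Singularity'": 1050,
    }
    weights = {h: length_prior(b) for h, b in hypotheses.items()}
    total = sum(weights.values())
    for h, w in weights.items():
        # Note: the number of steps each program would take to run
        # never appears anywhere in this calculation.
        print(h, w / total)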

> My reasoning was that a big simulation is harder to run than a small
> simulation. If a simulation grows bigger, then you're less likely to be
> able to continue to run it. If the people running a simulation only have
> K bits available, then once the simulation requires more than K bits to
> continue running, they have to shut it off.

People running the simulation? As in, intelligent thinkers? That's a
heck of a lot more complicated, and therefore much less likely, than
assuming the simulation is governed by a simple set of Turing-style
computational rules.

If we assume an atom-by-atom simulation, the computer doesn't care in
what order the atoms are arranged; they still require the same number
of bits to represent their quantum states. If we assume that only the
humans are simulated atom-by-atom and everything else is cheaply
computed (unless we look at it), that's a whole bunch of additional
complexity which needs to be taken into account.
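
Back-of-the-envelope version of the first point (the particle count
and per-particle record size are purely assumed, illustrative numbers):

    # Storage cost of a naive per-particle encoding; the figures are
    # assumptions for illustration only.
    bits_per_particle = 64       # assumed size of one particle's state record
    n_particles = 10 ** 50       # assumed count, order of magnitude only
    total_bits = n_particles * bits_per_particle
    print(total_bits)            # unchanged by rearranging the particles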

It doesn't require much additional complexity before we can safely
ignore the possibility. Since each extra bit of description length
halves the prior probability, if the set of rules saying "we get
killed" is fifty bits longer (in Kolmogorov terms) than the set of
rules saying "we don't get killed", the odds are around 2^50, or
roughly 1,000,000,000,000,000:1 against. Considering that the chance
of us getting killed by a programming error is probably more like 10%,
we really should require a rigorous (or at least semi-rigorous)
argument before we divert resources.
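
For reference, the arithmetic behind that ratio (fifty bits is just
the illustrative figure from above, not a measured quantity):

    # Each extra bit of description length halves the prior, so a
    # 50-bit penalty corresponds to odds of 2^50 against.
    penalty_bits = 50
    odds_against = 2 ** penalty_bits
    print(odds_against)   # 1125899906842624, i.e. roughly 10^15 : 1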

> Now I don't see this as a reason not to build an FAI, because the FAI
> should be able to do this sort of reasoning better than humans anyway,
> and without needing a ridiculous amount of computing power. It might
> place an upper bound on the size of the singularity, though.
>
>

 - Tom


