From: Peter de Blanc (email@example.com)
Date: Tue Jan 29 2008 - 23:14:05 MST
On Tue, 2008-01-29 at 23:41 -0500, Thomas McCabe wrote:
> Did you miss the quote from "Technical Explanation"? Essentially, the
> intuitive notion of how 'plausible' something is doesn't correspond
> well to an actual probability distribution. Because we have no
> knowledge whatsoever about the rules governing the simulation (other
> than the ones we can observe directly), to estimate the probability of
> a rule, you need to use Solomonoff induction or some approximation to
> it. If someone did math saying "Hey, a set of rules which leads to our
> imminent doom has much less complexity than a set of rules which lets
> us keep going", I'd be willing to revisit the simulation argument. As
> it is, I seriously doubt this; throwing in an additional conditional
> (if: we go through the Singularity, then: shut down the simulation)
> seems likely to add complexity, not remove it. The reverse conditional
> (if: we don't go through the Singularity, then: shut down the
> simulation) is simply a negation of the first one, so it seems likely
> to have similar complexity. "Seems likely" is obviously an imprecise
> statement; anyone have any numbers?
> - Tom
Do you apply such strict standards to all reasoning? "Show me the
Solomonoff Induction or shut up"?
The complexity of a simulation bears on how likely it is to be run in the
first place, but resource requirements are also relevant, and you're
ignoring those.
My reasoning was that a big simulation is harder to run than a small one.
As a simulation grows, it becomes less likely that its hosts can keep
running it: if the people running a simulation have only K bits available,
then once the simulation requires more than K bits to continue, they have
to shut it off.
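A toy sketch of that resource-bound argument (all numbers here are
hypothetical, chosen only for illustration): a host with a fixed memory
budget K must halt any simulation whose state grows past K bits, so a
fast-growing simulation gets shut off sooner than a slow-growing one.

```python
def run_simulation(initial_bits, growth_factor, k_budget, max_steps=1000):
    """Return the step at which the host must shut the simulation off,
    or max_steps if it stays within budget for the whole run."""
    bits = initial_bits
    for step in range(max_steps):
        if bits > k_budget:
            return step  # state exceeds the host's K bits: forced shutdown
        bits = int(bits * growth_factor)  # state grows each step
    return max_steps

# Same budget K, different growth rates (hypothetical values):
fast = run_simulation(initial_bits=1_000, growth_factor=2.0, k_budget=10**9)
slow = run_simulation(initial_bits=1_000, growth_factor=1.1, k_budget=10**9)
assert fast < slow  # the faster-growing simulation is cut off earlier
```

The point is only qualitative: whatever K is, a simulation that keeps
growing eventually crosses it, which is why growth rate matters and not
just rule complexity.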
Now, I don't see this as a reason not to build an FAI, because the FAI
should be able to do this sort of reasoning better than humans anyway,
and without needing a ridiculous amount of computing power. It might
place an upper bound on the size of the Singularity, though.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT