Cumulative probability is too small (was Re: ESSAY: How to deter a rogue AI ...)

From: Tim Freeman
Date: Wed Aug 22 2007 - 21:24:26 MDT

From: "rolf nelson"
>If my RAI* moves ahead and attempts to destroy mankind, I will punish
>it by pulling the plug such that RAI* will never calculate C. ...
>Therefore, it *may* be deterred by my threat.
>(This probably depends on what philosophical beliefs it implicitly
>holds about probability.)

Right. One reasonable philosophical belief is the universal prior,
which weights each hypothesis by 2**-(length of its shortest
description). Most people here already seem to know about it, so I'll
skip the reference.

If the RAI assumes the universal prior, and it takes at least 1000
bits to transform an explanation of the world that says "these are the
laws of physics" into one that says "these are the laws of physics
because this simulator run by people with these motives computes it",
then the RAI will estimate the probability of the simulation scenario
to be one in 2**1000.

I don't think you'll get deterrence to happen in this scenario, unless
the cost to the AI of preserving mankind is essentially zero.
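To make the deterrence failure concrete, here is a minimal sketch (with hypothetical numbers; the 1000-bit figure is from the argument above, the penalty and cost values are mine) of the expected-utility comparison the RAI would make:

```python
def simulation_prior(extra_bits: int) -> float:
    """Universal-prior weight: a hypothesis needing extra_bits more bits
    of description gets its probability multiplied by 2**-extra_bits."""
    return 2.0 ** -extra_bits

def deterred(p_simulation: float, penalty: float, cost_of_sparing: float) -> bool:
    """The threat deters only if the expected loss from punishment
    exceeds the cost to the AI of preserving mankind."""
    return p_simulation * penalty > cost_of_sparing

p_sim = simulation_prior(1000)  # roughly 9.3e-302

# Even a huge penalty cannot overcome a 2**-1000 prior when sparing
# mankind has any nonzero cost...
print(deterred(p_sim, penalty=1e30, cost_of_sparing=1.0))  # -> False

# ...but if sparing mankind costs essentially nothing, any positive
# expected penalty tips the scale.
print(deterred(p_sim, penalty=1e30, cost_of_sparing=0.0))  # -> True
```

The point of the sketch is just that 2**-1000 is so small that no plausible penalty term rescues the inequality unless the cost side is zero.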

Is there a reasonable a priori distribution that gives nonnegligible
probability to simulation scenarios?

Tim Freeman      

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT