Re: ESSAY: How to deter a rogue AI by using your first-mover advantage

From: Stathis Papaioannou
Date: Sun Aug 26 2007 - 07:50:08 MDT

On 26/08/07, Norman Noman <> wrote:
> > They might announce it as soon as they hear of Rolf Nelson's idea.
> I very much doubt that they WILL announce it, as soon as they hear of rolf's
> idea or at any point afterward. If you believe in god anyway, the idea of
> replacing divine judgment with a machine is blasphemous and accomplishes
> nothing. In any case, I imagine most people would get to about "ESSAY: How
> to deter a rogue AI by" before they stopped reading.

I guess it's also blasphemous for an organisation that rakes in
billions from the gullible faithful, despite what it says in the Bible
about camels, wealth and the eye of a needle...

> > In fact, every special interest group could make a similar proposal,
> But they won't, and no one would take them seriously if they did.

Why not? Arguably there is more motivation to implement a plan that
gains you something today than one that might guarantee your
descendants' future. That's why it's so difficult to deter people from
burning up fossil fuels and polluting the environment, even if they do
believe that it is ultimately a bad thing.

> > having only to commit to run the simulation when computing resources
> > become cheap enough, which eventually they will.
> Hopefully by that point the computing resources will be regulated such that
> this sort of tomfoolery is not allowed. By the friendly post-singularity
> whatsit.
> The differences between your hypothetical situation and rolf's plan are:
> A. First mover advantage. When the RAI appears, the battle for the future
> has already been won, either by the RAI or by whatever is simulating it.
> Here in 2007, we're still FIGHTING the battle. All your situation really
> consists of is some fatuous imps saying WOULDN'T YOU RATHER BE ON THE

We might not still be fighting the battle, because we might already be
in a simulation run by the God-schmucks (or whoever). You can't tell
it's a simulation; that's the point.

> B. Two-way interaction. In rolf's plan, the simulated and nonsimulated RAIs
> are in separate branches and thus the real RAI has no way to stop itself
> from being simulated.* In your thing, we can go to the house of whoever says
> JOIN ME OR DIE and punch them in the face.

You can do that, but if it turns out that you're in the simulation
you'll be sent to hell for your trouble.

> C. Goal conflict. Under your system, by cooperating we help doom a
> significant fraction of humanity to hell, which for many people would be
> regarded as a Bad Thing. Under rolf's plan, all the AI loses by cooperating is
> the raw material humanity is composed of, which it barely cares about one
> way or the other.

By cooperating and trying to spread the good word, you maximise the
chances that you and everyone else will go to heaven, at minimal cost
- sort of like Pascal's wager. Of course, the problem with Pascal's
wager is that there are so many actual and possible religions that you
may as well ignore all of them, unless you find good evidence that a
particular one is true - which here translates to evidence that you
are in fact in a simulation.

Stathis Papaioannou

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT