Re: ESSAY: How to deter a rogue AI by using your first-mover advantage

From: Norman Noman (overturnedchair@gmail.com)
Date: Sun Aug 26 2007 - 05:14:42 MDT


> They might announce it as soon as they hear of Rolf Nelson's idea.

I very much doubt that they WILL announce it, either when they first hear of
Rolf's idea or at any point afterward. If you already believe in God, the idea
of replacing divine judgment with a machine is blasphemous and accomplishes
nothing. In any case, I imagine most people would get to about "ESSAY: How
to deter a rogue AI by" before they stopped reading.

> In fact, every special interest group could make a similar proposal,

But they won't, and no one would take them seriously if they did.

> having only to commit to run the simulation when computing resources
> become cheap enough, which eventually they will.

Hopefully by that point, computing resources will be regulated such that
this sort of tomfoolery is not allowed. By the friendly post-singularity
whatsit.

The differences between your hypothetical situation and Rolf's plan are:

A. First-mover advantage. When the RAI appears, the battle for the future
has already been won, either by the RAI or by whatever is simulating it.
Here in 2007, we're still FIGHTING the battle. All your situation really
amounts to is some fatuous imps saying WOULDN'T YOU RATHER BE ON THE
WINNING SIDE?

B. Two-way interaction. In Rolf's plan, the simulated and non-simulated RAIs
are in separate branches, so the real RAI has no way to stop itself from
being simulated.* In your scenario, we can go to the house of whoever says
JOIN ME OR DIE and punch them in the face.

C. Goal conflict. Under your system, cooperating helps doom a significant
fraction of humanity to hell, which many people would regard as a Bad Thing.
Under Rolf's plan, all the AI loses by cooperating is the raw material
humanity is composed of, which it barely cares about one way or the other.

And even if your situation were plausible, and everyone, simply everyone,
built some kind of simulated karma trap, I doubt the influence they'd exert
would balance out by pulling equally in all directions. People certainly
have their disagreements, but they have agreements as well. If we were truly
at complete odds with each other, we'd never have gotten to the point of
multicellular life.

*It's true that the RAI could simulate its simulators in an effort to deter
them, but it's unclear whether this would be advantageous even if it cost
nothing and even if it worked. If the RAI's goal is to calculate C, having a
simulated counterpart with the handicap of having to save humanity is
actually a great deal. It's like an extra life.


