Re: ESSAY: How to deter a rogue AI by using your first-mover advantage

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Tue Aug 28 2007 - 05:54:25 MDT


On 28/08/07, Norman Noman <overturnedchair@gmail.com> wrote:

> > I think there would be more people interested in promoting their
> > religion or increasing their profits than would be interested in
> > making their descendants' future safe from a RAI. This might not be
> > rational or moral or whatever, but it's what people would do.
>
> It doesn't matter what pre-singularity people want, only what the
> post-singularity entity or entities with the power to do the simulations
> wants. I find it very difficult to believe that post-singularity Big Tobacco
> and Osama bin Laden will even exist in any meaningful sense, let alone stay
> true to the wildly out-of-character schemes you suggest they will soon have.
>
> And even if you're RIGHT, and there is a pandemonium of human infighting via
> simulation which cancels out to nothing, there is no reason Rolf's plan
> cannot be implemented as well.

It wouldn't do any harm to implement it, but, to be consistent, an
agent should take seriously any threat via simulation from the next
level up. If there is only one known such threat, it is easy to decide
what you should do; but the moment it becomes clear that this is a
cheap strategy for gaining advantage, everyone will jump on the
bandwagon.

> > > Are you playing the devil's advocate or do you really think it's even
> > > remotely likely that Big Tobacco would invest in a karmic simulation
> > > of the universe in order to get people to smoke?
> >
> > As you put it, everybody and his brother could join in, with the
> > result that the only rational action would be to ignore the
> > possibility of a simulation.
>
> So, your answer is YES?

Yes, although the more I think about it, the more it seems that
someone might actually try this, especially if the simulation argument
becomes commonly accepted as valid.

> If both parties run X simulations each, your likelihood of being in one of
> A's simulations rather than one of B's is proportional to the likelihood
> of A existing in the first place rather than B. As X goes to infinity,
> this ratio does not change.

Right, but if there are many competing interests rather than just two,
it will be difficult to choose between them. You would have to try to
guess which organisation would be the most determined and the most
capable of running their proposed simulation in the long term.
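
To make the quoted ratio argument concrete, here is a minimal Python
sketch. The priors below (0.3, 0.1, 0.05) are made up purely for
illustration: odds_a_vs_b shows that the number of simulations x
cancels out of the two-party odds, and sim_weights extends this to
many competing parties, where party i's weight is proportional to
prior_i * count_i.

    # Sketch of the ratio argument, with made-up priors.
    def odds_a_vs_b(p_a, p_b, x):
        # Expected number of your copies in A's simulations is
        # proportional to p_a * x, and likewise for B, so the
        # count x cancels out of the odds.
        return (p_a * x) / (p_b * x)

    def sim_weights(priors, counts):
        # Many competing parties: the chance of being in party i's
        # simulations is proportional to priors[i] * counts[i].
        totals = [p * c for p, c in zip(priors, counts)]
        return [t / sum(totals) for t in totals]

    for x in (1, 1000, 10**9):
        print(x, odds_a_vs_b(0.3, 0.1, x))   # always 3.0
    print(sim_weights([0.3, 0.1, 0.05], [10, 10, 1000]))

The last line illustrates the point about many interests: a party with
a small prior can still dominate if it runs far more simulations,
which is why you would have to guess at both priors and determination.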

-- 
Stathis Papaioannou

