Re: ESSAY: Would a Strong AI reject the Simulation Argument?

From: Norman Noman (overturnedchair@gmail.com)
Date: Mon Aug 27 2007 - 21:00:23 MDT


On 8/27/07, rolf nelson <rolf.hrld.nelson@gmail.com> wrote:
>
> > > Scenario 5: if the UFAI believes in Nonlocal Causality, it may create
> > > a vast number of identical simulated copies of itself to increase the
> > > chance that it exists in the BAD world rather than in the GOOD world.
> > > This intuitively sounds like a stupid thing to do, and no human would
> > > ever do such a thing; but, without a Local Causality axiom, I wouldn't
> > > rule out this scenario.
> >
> > This doesn't work, because RAI* creates as many copies as RAI does.
> >
>
> RAI* doesn't have the resources to create the same # of copies as RAI.

Oh, you're right.

I guess the real problem is that RAI's simulations are redundant: it doesn't
matter whether they calculate C or not, since RAI will calculate it anyway, and
so they don't influence the final decision.
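
For what it's worth, here's a minimal back-of-envelope sketch of the
copy-counting intuition behind Scenario 5, assuming a naive self-sampling rule
(credence of being in a world proportional to the number of identical copies
running there). The function name and the counts are my own, purely for
illustration, not anything from the thread:

    # Naive self-sampling: credence is the fraction of identical copies
    # that live in the BAD world. Counts are made up for illustration.
    def credence_in_bad_world(copies_in_bad: int, copies_in_good: int) -> float:
        return copies_in_bad / (copies_in_bad + copies_in_good)

    # One copy in each world: credence 0.5.
    print(credence_in_bad_world(1, 1))          # 0.5

    # The UFAI spins up a vast number of copies in the BAD world, and the
    # naive self-sampling credence shifts toward BAD...
    print(credence_in_bad_world(10**6, 1))      # ~0.999999

    # ...but if RAI* in the GOOD world could match it copy for copy, the
    # credence would snap back to 0.5 -- the objection raised above, which
    # fails only because RAI* lacks the resources to match the count.
    print(credence_in_bad_world(10**6, 10**6))  # 0.5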


