**From:** Vladimir Nesov (*robotact@mail.ru*)

**Date:** Fri Aug 31 2007 - 11:22:44 MDT

**Next message:**Mike Dougherty: "Re: Scenario for early hard takeoff"**Previous message:**Matt Mahoney: "Scenario for early hard takeoff"**In reply to:**Stathis Papaioannou: "Re: ESSAY: How to deter a rogue AI by using your first-mover advantage"**Next in thread:**Stathis Papaioannou: "Re: ESSAY: How to deter a rogue AI by using your first-mover advantage"**Reply:**Stathis Papaioannou: "Re: ESSAY: How to deter a rogue AI by using your first-mover advantage"**Messages sorted by:**[ date ] [ thread ] [ subject ] [ author ] [ attachment ]

Tuesday, August 28, 2007, Stathis Papaioannou wrote:

SP> By TM enumerator I take it you mean a program that enumerates all
SP> possible programs, like a universal dovetailer. In the sense I have
SP> described, then yes, all the other simulations are irrelevant.
SP> However, where there are multiple competing futures (as below) the
SP> weighting of each one matters. There are theories in which it is
SP> assumed that the universe is the set of all possible programs (which
SP> perhaps need only exist as Platonic objects), but I don't know if it
SP> has been successfully shown that this idea yields the known laws of
SP> physics.

It yields all laws of physics, including ours, as long as they are
computable. (It doesn't seem possible to ever prove from observations
that some laws of physics are not computable. Observations are finite.
When a conclusion is drawn by experts, it's equivalent to the experts'
minds being in a particular set of configurations, which is also a
finite thing.)

>> SP> However, if there are two or more competing "next moments" then the
>> SP> number of simulations is relevant. If there are X simulations in which
>> SP> you are tortured and Y simulations in which you are not tortured in
>> SP> the next moment, then you have a X/(X+Y) chance of being tortured.

I think I found a better argument about this point. Certainly one
tries to anticipate the future, but this behaviour is grounded in
anticipation of _future experience_. And future experience itself
does not depend on the number of times it's simulated.

When you use probability theory to make rational choices, you do it
only because you anticipate that they will pay off in your future
experience, in the dominating bulk of possible futures. Still, you usually
sacrifice those possible futures where fate plays against you.
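The disagreement above can be made concrete with a toy calculation. The model below is my own hypothetical illustration, not anything stated in the thread: several running simulations may produce the very same next-moment experience, and the two views count them differently.

```python
from collections import Counter

# Hypothetical toy model: each entry is the next-moment experience
# produced by one running simulation.  Several simulations happen to
# produce the very same experience.
simulations = ["tortured", "tortured", "tortured", "not-tortured"]

# Instance-weighted view (SP's X/(X+Y) rule): every running copy counts.
counts = Counter(simulations)
p_instance = counts["tortured"] / len(simulations)

# Experience-weighted view (the argument above): a future experience
# does not depend on how many times it is simulated, so duplicate
# copies collapse and each distinct experience counts once.
distinct = set(simulations)
p_experience = 1 / len(distinct)

print(p_instance)    # 0.75 -- 3 of 4 running copies are tortured
print(p_experience)  # 0.5  -- 1 of 2 distinct experiences
```

The two rules only agree when every distinct experience is simulated the same number of times, which is exactly why the counting question matters.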

>> To the point of my previous message: how do you count simulations?
>> What is your solution to that counting paradox I wrote about in my previous message?
>> Does the presence of 2^N simulations within a single implementation
>> somehow influence the probability of you being in a certain simulation?

SP> If I understand you correctly, you are suggesting that doubling the
SP> number of implementations in a recursive simulation will increase the
SP> total number of entities in that implementation not by a factor of 2,
SP> but by a factor of 2^N, with N being the number of levels. I don't see
SP> why this shouldn't also increase the weighting by 2^N for the purpose
SP> of probability calculations, although this does provide a possible
SP> experimental method to determine if we are in a recursive simulation.

That's not what I meant, but the details don't really matter. This
counting issue raises just another serious problem with simulations.

What really counts as a simulation of a certain mathematical model of a
simulated universe? Any implementation arranges matter of the host
universe into certain patterns. Why are some patterns said to provide
simulations and not others? Matter of the host universe has no direct
correspondence to the 'matter' of the simulated universe. To establish that
implementation X (a particular pattern of matter in the host universe) is
a simulation of universe model Y (a mathematical description), one needs
an interpretation procedure F that can take X as input, convert it to
the same mathematical notation, and compare it to Y: F(X)=Y. The presence of this
procedure (which nobody needs to actually build in order for the simulation
to be a genuine one) is somehow implied if X is developed to
implement Y. But how complex is F allowed to be? If it doesn't need to
be implemented, can't it include the whole simulation, so that X is nil
and F(nil)=Y?
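The F(nil)=Y construction can be written down directly. In this sketch the universe Y, its law `step`, and the initial state are all invented for illustration; the point is only that nothing stops F from recomputing Y itself and ignoring its input:

```python
def step(state: int) -> int:
    # Toy "law of physics" of the simulated universe (an assumption
    # for illustration; any computable rule would do).
    return (state * 3 + 1) % 16

def Y(n_ticks: int, initial: int = 1) -> list:
    # The universe model: the full sequence of states it runs through.
    states, s = [initial], initial
    for _ in range(n_ticks):
        s = step(s)
        states.append(s)
    return states

def F(x) -> list:
    # A degenerate interpretation procedure: it ignores the physical
    # pattern x entirely and computes the universe itself.
    return Y(5)

# X is nil, yet F(X) = Y holds -- the empty pattern "simulates" Y
# just as well, if F may be arbitrarily complex.
assert F(None) == Y(5)
```

This is exactly why some bound on the complexity of F (relative to the complexity of Y) seems needed before "X simulates Y" carries any content.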

As a simple example, say the state of a simulated universe is a finite 2D binary
image of size AxB. When is it considered simulated? If a program
stores this state in computer memory, performs a computation that
modifies it every simulated tick according to the simulation's laws of physics, and
outputs the image to a monitor screen, it seems to simulate that
universe. But will it cease to simulate it if I turn the monitor off?
Will it simulate it twice if I install two monitors in parallel?
It's only meaningful to say that an implementation provides a way to
access information about the simulated universe.
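A minimal sketch of that AxB binary-image universe makes the monitor question vivid. The law of physics here is an assumption (Conway-style rules on a small wrapped grid stand in for "the simulation's laws"); the `show` function plays the role of the monitor:

```python
# Toy A x B binary-image universe; Conway-style update rule assumed
# as its "laws of physics" on a wrapped (toroidal) grid.
A, B = 4, 4

def tick(grid):
    def live_neighbours(i, j):
        return sum(grid[(i + di) % A][(j + dj) % B]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))
    return [[1 if (grid[i][j] and live_neighbours(i, j) in (2, 3))
             or (not grid[i][j] and live_neighbours(i, j) == 3) else 0
             for j in range(B)] for i in range(A)]

def show(grid):
    # The "monitor": pure output with no effect on the state.
    for row in grid:
        print("".join("#" if c else "." for c in row))

state = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]  # a stable pattern ("block" still life)

next_state = tick(state)
# Whether show(state) runs zero, one, or two times (one monitor off,
# on, or two in parallel), next_state is computed identically.
assert next_state == state
```

Nothing in `tick` refers to `show` at all, which is the intuition behind saying the monitors change only how the information is accessed, not what is simulated.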

I'm mainly interested in this issue because I have doubts about
uploads not being p-zombies. These hand-wavy theories of simulated
experience are full of paradoxes. I agree that one can't in principle
prove that a given observed entity has consciousness, but at least there
should be a consistent theory of what consciousness is. In this case, I
take a universe containing a conscious observer as a consciousness
vessel, so that genuine simulation corresponds to implementation of
consciousness.

-- Vladimir Nesov mailto:robotact@mail.ru


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT