Re: Sysop yet again Re: New website: The Simulation Argument

From: Jeff Bone (jbone@jump.net)
Date: Fri Dec 07 2001 - 13:30:15 MST


Brian Atkins wrote:

> Your scenario is very unlikely,

What, exactly, is unlikely about bad weather or earthquakes or any of a large
number of other actuarial threats? To put this in context: I saw an analysis
(done by an actual actuary ;-) some time ago which concluded that even if
disease, old age, and intentional violence were eliminated as causes of human
death, the average lifespan of a human being would still only be approximately
600 years. That is, the mortality rate at about 1200 years approaches 100% due
to the year-by-year likelihood of dying in an "accident" (i.e., being physically
destroyed) given current accident rates. Assuming that we continue to exist
primarily physically, the maximum amount by which we can increase longevity is
constrained by physics. At the boundaries, it's constrained by the second law of
thermodynamics.
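
For what it's worth, the arithmetic behind that figure is easy to reproduce. A
minimal sketch in Python, assuming a constant per-year probability of accidental
death (the 1/600 rate is invented only so the mean lifespan lands near the
~600-year figure above; a real analysis would use age- and cause-specific rates):

    # Survival under a constant annual probability of accidental death.
    # ASSUMPTION: the hazard is constant year over year; 1/600 is illustrative.
    annual_accident_risk = 1.0 / 600.0

    mean_lifespan = 1.0 / annual_accident_risk            # ~600 years
    alive_at_1200 = (1.0 - annual_accident_risk) ** 1200  # fraction surviving
    print(mean_lifespan)        # 600.0
    print(1.0 - alive_at_1200)  # ~0.86 cumulative mortality by year 1200

The exact percentages move with the assumed rate; the point is the shape of the
curve --- with any fixed accident rate, cumulative mortality keeps compounding
toward 100%.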

The real question is the relationship between bits and atoms. The less future
civilizations rely on atoms --- the more bit-based we are --- the less we need
consider physical actuarial threats. I find the Sysop idea rather amusing in
many ways; the name of this list refers to the notion of future shock, but IMO
there's a built-in amount of shock in assuming that we will prefer to interact
with the physical universe to the extent we are forced to do so today; also that
we remain individuals, and that our value system will prize risk elimination
over the things we would have to give up to achieve it,
etc. Also, to what extent does the concept of physical death really matter
assuming the possibility of backups, etc? I.e., all the concerns that we suppose
a Friendly Sysop would have are built on a value system and set of assumptions
that we Unascended have today. There's no way to know how all that will appear
to a Posthuman perspective, but I'd wager that they're going to find it all quaint
and amusing.

> but at any rate should a no-win situation
> become apparent the Sysop likely would simply inform the affected individuals
> of the situation and the lack of absolute safety, and let them decide what
> they want to do.

Well, that's fine, but giving the potentially affected individual ample lead
time to avoid certain classes of disaster will require
fine-grained simulation at a significant level. And there're physical limits to
the possible accuracy of simulation.
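
That limit isn't just engineering, either; sensitive dependence on initial
conditions caps the useful prediction horizon of chaotic systems regardless of
how much hardware you throw at them. A toy illustration in Python (the standard
logistic map, nothing to do with any particular geophysical model):

    # Two trajectories of the chaotic logistic map, started 1e-12 apart.
    # The tiny initial error saturates within a few dozen steps, which is the
    # generic obstacle to long-horizon, fine-grained prediction.
    x, y = 0.4, 0.4 + 1e-12
    for step in range(60):
        x = 3.9 * x * (1.0 - x)
        y = 3.9 * y * (1.0 - y)
    print(abs(x - y))   # no longer tiny --- the two forecasts have diverged

So even a Sysop-grade simulator buys lead time, not certainty.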

> Just to clarify, SIAI is not about building a Sysop. We are trying to
> build a seed mind that has low risk of unFriendliness, but what it chooses
> to develop into is up to it, and if it stays Friendly then it will not
> choose to develop into something that is "not worth the risks".

So you say. IMO, that appears tautological.

> Your
> other choice BTW is to wait around until some other organization lets
> loose either by design or accident some sort of higher-risk A-mind.

Hey, despite the criticism --- which is intended to be constructive --- I'm
rooting for you guys. That is, UNTIL somebody comes along with a better plan
that's less tightly tautological and more pragmatic. ;-)

> The "let's just don't ever build an A-mind" choice is almost certainly
> a fantasy so it is not up for consideration.

Not suggesting it, would never suggest it, IMO SAI can't happen fast enough. I'm
just suspicious both of the Friendliness argument itself (mechanically) and the
notion in general.

> I think you face potential disasters from external events no matter what
> substrate so I don't see that it matters much.

Yes, but if your required physical substrate is minimized in volume, mass, and
other physical characteristics then it is minimally exposed to actuarial risks.
Any civ running as a simulation on a "well protected" physical substrate is at
lower risk than, say, a planet-based civ. And choice of substrate (and physical
architecture) for running such a sim has significant impact on risks; a
world-size normal-matter singleton simulator running a civ is much more at-risk
from a number of different classes of disaster than a civ running on, say, a
distributed swarm of independent computational units made from dark matter. (I'm
not supposing anything about the practicality of either, just trying to
illustrate why substrate matters.)
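
A toy model of why the distributed case wins (numbers invented, and assuming the
units are fully redundant and fail independently --- the independence is the
whole trick):

    # Singleton substrate vs. N independent, fully redundant units.
    # ASSUMPTIONS: the per-year catastrophic-loss probability is made up;
    # failures are independent; every unit carries a complete copy of the civ.
    p_loss_per_year = 1e-6
    n_units = 10

    singleton_risk = p_loss_per_year          # lose the single substrate
    swarm_risk = p_loss_per_year ** n_units   # lose every unit in the same year
    print(singleton_risk)   # 1e-06
    print(swarm_risk)       # 1e-60

A threat that correlates across all the units erases that advantage, which is
why the classes of disaster matter as much as the raw numbers.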

> I find your earthquake
> example to be very unconvincing... I was just reading in New Scientist
> a couple weeks ago about a new prediction theory for earthquakes and
> other sudden events so I think a superintelligence will be able to find
> a way to predict these events, or even if it has to simulate the whole
> damn planet it can do that too quite easily, in fact it could probably
> use a very small chunk of the Earth for the needed hardware assuming
> computronium really is the best way to compute. Of course why bother
> when it probably has the technology to completely eliminate earthquakes.

Okay, forget about earthquakes. What about interstellar neighborhood stellar
disaster chains? Etc. etc. etc. The risk to perpetual existence across the
board approaches unity; attempting to build a rational agent who has as a goal
elimination of involuntary risk assumption is a quixotic and perhaps irrational
task. OTOH, it's a very different proposition if the goal is minimization of risk and
the scope of such activity is constrained.
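
The "approaches unity" claim is just compounding again, and it also shows why
minimization is a coherent goal where elimination isn't. A rough sketch (Python,
all hazard rates invented purely to show the two shapes):

    # Cumulative survival probability over a very long horizon.
    years = 10**6

    # Any *fixed* per-year risk, however small, compounds toward certain loss.
    fixed_hazard_survival = (1.0 - 1e-4) ** years        # effectively zero

    # If the per-year risk can be driven down fast enough (here ~1/t^2), the
    # cumulative survival probability stays bounded away from zero forever.
    falling_hazard_survival = 1.0
    for t in range(1, years + 1):
        falling_hazard_survival *= 1.0 - min(0.5, 1.0 / t**2)

    print(fixed_hazard_survival)     # ~4e-44, effectively zero
    print(falling_hazard_survival)   # ~0.25, a positive constant

Elimination demands the per-period risk hit exactly zero; minimization only asks
that it keep falling, which is a much saner thing to ask of any agent, Friendly
or otherwise.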

> Unforeseen surprises from outside the solar system seem like the only
> real threats, but perhaps you have some other ideas.

Collapse of a metastable vacuum state triggered by local particle accelerator
experiments. Passing through a large interstellar cloud of dark matter.
Supernova in the neighborhood. Accidental creation of a black hole.
"Information clogging" of the universe due to the observation effects of
computronium on the quantum fabric. Etc. etc. etc. Most of these are probably
pure fantasy, but just as surely as that's true there are huge classes of risk we
haven't even considered.

> Personally I don't see how preventing 99.999% of bad stuff is an
> unworthy goal.

It's not --- it's an extremely noble goal. The question isn't the goal, it's the
practicality of the path. "The road to Hell is paved with good intentions" and
all that. My perspective is that pursuing this course is very desirable given
the alternatives, but IMO we should be careful to be realistic and not Pollyannas
about it.

> A Sysop is not about providing perfect safety, it is
> about creating the most perfect universe possible

The crux of my issue is this: "most perfect universe" is underdefined, and
indeed perhaps undefinable in any universally mutually agreeable fashion.

> while still within
> the physical laws we are all apparently stuck with. Even if it turns
> out it can only prevent 10% of the bad stuff then that is worth
> doing- why wouldn't it be?

There's always a trade-off between safety and liberty. Consider how security
fears over 9-11 are impacting civil liberties already. One of my biggest fears
re: the social impact of accelerating technology isn't that a Power takes over
--- which IMO is unavoidable, really --- but that fear and security concerns
trump concerns over progress. Friendliness IMO seems largely to be about making
the world a safe place --- but "safe" is a subjective value judgement, and IMO it
may be dangerous to task (or even just ask) a Power to provide it.

> P.S. I reiterate no one is tasking a Power to do anything. Powers decide
> for themselves what they want to do :-)

You'd better believe it, buddy! I am in total agreement with this statement.
The fundamental failure of Friendliness (as I understand it, based on a few
reads) is that it attempts to constrain such a Power, and tautologically brushes
off any arguments for why such constraints might or might not be desirable,
achievable, etc.

Still, like I said, it's the best shot we've got today, so my criticism should be
taken exactly as it's intended --- constructively, informatively.

jb


