Sysop yet again Re: New website: The Simulation Argument

From: Brian Atkins (brian@posthuman.com)
Date: Thu Dec 06 2001 - 23:54:58 MST


I know at this point Eliezer is beating his head against the wall, but
I can't resist...

Jeff Bone wrote:
>
> Gordon Worley wrote:
>
> > Computationally, this is too expensive for a practical Sysop to work.
> > The Sysop need not know the state of everything, but be there when
> > needed.
>
> And my argument is that in order to determine when and where the Sysop will
> be needed, some agent --- whether the Sysop itself or the environment is
> unimportant --- is going to need a predictive ability that allows it to
> prevent unwanted harm or death to individuals. This will be, for example,
> *far* more complex than forecasting local weather patterns in detail with
> any accuracy. Indeed, consider weather: a Sysop that in fact performs as
> Eli suggests it should (i.e., ensures --- absolutely, reliably --- that
> involuntary harm or damage cannot impact a protected individual under the
> Sysop's care) will either need to be able to control the weather (which
> implies deep simulation ability in that area) or be able to proactively
> protect particular individuals and their goods when threatened by bad
> weather. Neither of these may be possible: the first may not be possible
> due to the complex dynamics of physical phenomena like weather; the latter
> may not be possible simply because it may be physically impossible to
> protect something against the forces seen in certain disastrous weather
> conditions. Bad weather, earthquakes, etc. would indeed require extremely
> fine-grained simulation of the time evolution of large interrelated systems
> of effects, to avoid the *eventual* impact on the constituency.

Your scenario is very unlikely, but at any rate, should a no-win situation
become apparent, the Sysop would likely simply inform the affected individuals
of the situation and the lack of absolute safety, and let them decide what
they want to do. If they then choose to stay in the danger, they are
voluntarily choosing to risk death, so if they die the Sysop has not
failed in its task of preventing involuntary death. The Sysop cannot defeat
the laws of physics, but it can at least keep you informed and provide
alternatives.

>
> I agree that this is all, most likely, computationally impractical --- not
> particularly because of lack of computation ability (let's assume
> computronium) but because of the potential physical limitations to
> simulating the real world in sufficient detail to provide absolute
> guarantees of safety. IMO, the only thing a practical Sysop will be able
> to do is guarantee best-effort protection and safety, and *that* might not
> be worth the risks involved.

Just to clarify, SIAI is not about building a Sysop. We are trying to
build a seed mind that has a low risk of unFriendliness, but what it chooses
to develop into is up to it, and if it stays Friendly then it will not
choose to develop into something that is "not worth the risks". Your
other choice, BTW, is to wait around until some other organization lets
loose, either by design or by accident, some sort of higher-risk A-mind.
The "let's just never build an A-mind" choice is almost certainly
a fantasy, so it is not up for consideration.

>
> BTW, this all assumes that some part of the constituency "remains"
> interested in being physically present in the physical world. If everybody
> uploads --- *everybody* --- then this isn't as big a problem, though the
> Sysop must still be concerned with the physical safety of whatever
> substrate it runs on.

I think you face potential disasters from external events no matter what
substrate you run on, so I don't see that it matters much. I find your
earthquake example very unconvincing... I was just reading in New Scientist
a couple of weeks ago about a new prediction theory for earthquakes and
other sudden events, so I think a superintelligence will be able to find
a way to predict them. Even if it has to simulate the whole damn planet,
it could do that too quite easily; in fact it could probably use a very
small chunk of the Earth for the needed hardware, assuming computronium
really is the best way to compute. Of course, why bother, when it probably
has the technology to eliminate earthquakes entirely?

Unforeseen surprises from outside the solar system seem like the only
real threats, but perhaps you have some other ideas.

>
> Logic, common sense, and actuarial reasoning should tell us that that
> *absolute* safety is an impossibility, and my gut tells me that attempting
> to task some Power with providing it is a recipe for disaster.
>

Personally I don't see how preventing 99.999% of bad stuff is an
unworthy goal. A Sysop is not about providing perfect safety; it is
about creating the most perfect universe possible while still within
the physical laws we are all apparently stuck with. Even if it turns
out it can only prevent 10% of the bad stuff, that is still worth
doing; why wouldn't it be?

P.S. I reiterate no one is tasking a Power to do anything. Powers decide
for themselves what they want to do :-)

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT