Re: Sysop yet again Re: New website: The Simulation Argument

From: Brian Atkins (brian@posthuman.com)
Date: Fri Dec 07 2001 - 23:46:56 MST


Jeff Bone wrote:
>
> Brian Atkins wrote:
>
> > Your scenario is very unlikely,
>
> What, exactly, is unlikely about bad weather or earthquakes or any of a large

Nothing. What is unlikely is a superintelligence being unable to stop or
predict earthquakes, which is what I said.

>
> The real question is the relationship between bits and atoms. The less future
> civilizations rely on atoms --- the more bit-based we are --- the less we need
> consider physical actuarial threats. I find the Sysop idea rather amusing in
> many ways; the name of this list refers to the notion of future shock, but IMO
> there's a built-in amount of shock in assuming that we will prefer to interact
> with the physical universe to the extent we are forced to do so today; also that
> we remain individuals, that our value system will place the concept of risk
> elimination over that of those things we will have to give up to achieve it,
> etc. Also, to what extent does the concept of physical death really matter
> assuming the possibility of backups, etc? I.e., all the concerns that we suppose
> a Friendly Sysop would have are built on a value system and set of assumptions
> that we Unascended have today. There's no way to know how all that will appear
> to Posthuman perspective, but I'd wager that they're going to find it all quaint
> and amusing.

Well, Jeff, maybe you'll be the one to figure out how to warp into the
Universe next door and keep it all to yourself :-) But the Sysop Scenario
comes from the idea that, at least for a while, we are going to be stuck
running on computronium or worse. We will not "prefer" that; it is simply
(barring magic physics) what we expect to end up with. So I have to shoot
that remark of yours down. I will also shoot down your death-vs.-copies
remark, since no matter how many copies you have floating around the
solar system, if all the atoms get taken over by a Blight you will reach
a state of "complete" death. There are things that even SIs probably have
to worry about. If you can accept that, then you can accept that some
form of Sysop /may/ be needed even in that future time. But at any rate,
the whole Sysop thing is not exactly central here.

What is much more important is /getting there/ in the first place. So
I have to agree with Gordon that you seem to be stuck on something that
has little importance to pre-Singularity goings-on. The fact is that the
sentiences of the here-and-now do have certain things they want kept
safe, and ways must be found to accomplish this, preferably with the
least amount of risk. It is quite possible, as I said, that the
Transition Guide, as it matures, decides that a Sysop is the wrong way
to go and goes off and does something completely different. Friendliness
is not about expecting any particular outcome other than the one that is
logically and rationally best for everyone, based on what they want.

>
> > but at any rate should a no-win situation
> > become apparent the Sysop likely would simply inform the affected individuals
> > of the situation and the lack of absolute safety, and let them decide what
> > they want to do.
>
> Well, that's fine, but doing so with ample lead time for the potentially
> affected individual to avoid certain classes of disaster will require
> fine-grained simulation at a significant level. And there're physical
> limits to the possible accuracy of simulation.

I don't see what you're getting at. In the case of earthquakes, for
instance, the Sysop would already know they exist. So upon noticing that
"earthquakes exist, and I have no way to predict or stop them," it would
likely begin immediately notifying people and offering them whatever
alternatives it had. Like I said, if they decide to stay around, it's
their own fault when something bad happens.

The only other class of no-wins is surprise situations, like, say, a
near-light-speed black hole zooming into the solar system. Well, as soon
as the Sysop's external sensors picked it up, it would let everyone know
to clear out of the area.

This kind of thing would of course not be perfect safety; it is simply
the best possible under physical limits. Still, you can't beat that.

>
> > Just to clarify, SIAI is not about building a Sysop. We are trying to
> > build a seed mind that has low risk of unFriendliness, but what it chooses
> > to develop into is up to it, and if it stays Friendly then it will not
> > choose to develop into something that is "not worth the risks".
>
> So you say. IMO, that appears tautological.

The latter part of my statement appears tautological, but the idea that
an AI can be designed such that it will stay Friendly is not.

>
> > Your
> > other choice BTW is to wait around until some other organization lets
> > loose either by design or accident some sort of higher-risk A-mind.
>
> Hey, despite the criticism --- which is intended to be constructive --- I'm
> rooting for you guys. That is, UNTIL somebody comes along with a better plan
> that's less tightly tautological and more pragmatic. ;-)

I don't think we've heard any criticism from you yet regarding either
CFAI or GISAI. If you have comments about the feasibility of either,
then by all means let's drop the Sysop thread and get to the meat.

> > I find your earthquake
> > example to be very unconvincing... I was just reading in New Scientist
> > a couple weeks ago about a new prediction theory for earthquakes and
> > other sudden events, so I think a superintelligence will be able to find
> > a way to predict these events; even if it has to simulate the whole
> > damn planet it can do that too quite easily. In fact it could probably
> > use a very small chunk of the Earth for the needed hardware, assuming
> > computronium really is the best way to compute. Of course, why bother
> > when it probably has the technology to completely eliminate earthquakes.
>
> Okay, forget about earthquakes. What about interstellar neighborhood stellar
> disaster chains? Etc. etc. etc. The risk to perpetual existence across the
> board approaches unity; attempting to build a rational agent who has as a goal
> elimination of involuntary risk assumption is a quixotic and perhaps irrational
> task. OTOH, very different proposition if the goal is minimization of risk and
> the scope of such activity is constrained.

The only /real/ problem I see is heat death. By replicating myself and
mailing off copies to different areas of the Universe, I can avoid any
kind of problem with random nasty events. Plus you can send out nano
space probes to map each and every nasty little bit out there, so you
can predict it all 10 billion years in advance. It's really not that
difficult. The real risks come from within, from unFriendly SIs.
Depending on what the physics are like, you might end up with an
intelligence arms race where, if you don't keep up with Jones' garage of
three Jupiter brains, your nanodefenses are no match for theirs and you
get eaten. Or something like that. I would personally prefer that,
rather than forcing everyone in the Universe to stay constantly maxed
out in intelligence and defenses, we instead develop some system to
notice when someone goes rogue and have the Sysop take them out of
circulation before they become a threat to anyone. But like I said, who
can really say how it will all turn out.

>
> > Personally I don't see how preventing 99.999% of bad stuff is an
> > unworthy goal.
>
> It's not --- it's an extremely noble goal. The question isn't the goal, it's the
> practicality of the path. "The road to Hell is paved with good intentions" and
> all that. My perspective is that pursuing this course is very desirable given
> the alternatives, but IMO we should be careful to be realistic and not pollyannas
> about it.

Right, well like I said, I trust a Friendly SI to be able to figure out
pretty easily whether it is practical or not. The Sysop thing is just a
thought experiment that is rather unlikely to be exactly how it actually
turns out.

>
> > A Sysop is not about providing perfect safety, it is
> > about creating the most perfect universe possible
>
> The crux of my issue is this: "most perfect universe" is underdefined, and
> indeed perhaps undefinable in any universally mutually agreeable fashion.

It's on a person-by-person basis, with the Sysop breaking ties :-) That's
my story, and I'm sticking to it :-)

>
> > while still within
> > the physical laws we are all apparently stuck with. Even if it turns
> > out it can only prevent 10% of the bad stuff then that is worth
> > doing- why wouldn't it be?
>
> There's always a trade-off between safety and liberty. Consider how security
> fears over 9-11 are impacting civil liberties already. One of my biggest fears
> re: the social impact of accelerating technology isn't that a Power takes over
> --- which IMO is unavoidable, really --- but that fear and security concerns
> trump concerns over progress. Friendliness IMO seems largely to be about making
> the world a safe place --- but "safe" is a subjective value judgement, and IMO it
> may be dangerous to task (or even just ask) a Power to provide it.

I think your claim that there's always a tradeoff is wrong. For instance,
I can inject myself with some nanobots that prevent all kinds of internal
diseases, which would increase my safety without hurting my liberty one
iota; in fact, it would likely increase my liberty, because I could then
eat more bad food :-)

Friendliness also, BTW, is not necessarily about making the world a safe
place. As I said, it is a completely different topic and aim from the
Sysop discussion. Friendliness is strictly about how you build an AI
that will be "nice". If all AI designers build their AIs this way, then
yes, it will make the world safer than it would have been otherwise, but
in terms of directly increasing the safety of people's lives beyond
that, you are now talking about individual decisions made by the FAI.
For all we know, the FAI may determine that the best thing to do is to
fly off the planet and never talk to us again, although of course that
looks unlikely. It will likely decide that making the Universe safer is
a Good Thing, but who knows how it will accomplish that? It may decide
that the best way is to upgrade everyone to superintelligence so they
can perceive the objective morality it has discovered. Or it may decide
a Sysop is needed. Whatever it decides, you can be sure it will make the
decision only after fully understanding what you want, because what /it/
will want is to help you.

You read this, right? http://www.intelligence.org/CFAI/info/indexfaq.html#q_1

>
> > P.S. I reiterate no one is tasking a Power to do anything. Powers decide
> > for themselves what they want to do :-)
>
> You'd better believe it, buddy! I am in total agreement with this statement.
> The fundamental failure of Friendliness (as I understand it, based on a few
> reads) is that it attempts to constrain such a Power, and tautologically brushes
> off any arguments for why such constraints might or might not be desirable,
> achievable, etc.
>
> Still, like I said, it's the best shot we've got today, so my criticism should be
> taken exactly as it's intended --- constructively, informatively.
>

Well, we can't respond unless you want to point out specific instances
of the alleged failures.

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

