Re: Sysop hacking

From: polysync@pobox.com
Date: Thu Feb 14 2002 - 15:40:11 MST


From: "Eliezer S. Yudkowsky"
>> I assume some will still be around to upload, and not necessarily to the
>> same Sysop you decide to visit.
> A "Sysop" is universal within human space [...]

 I've thought about this for the past week, and I'm no closer to liking it or
thinking it's a good idea. I had drafted a rambling reply, but I realized that
most of my points rested on several assumptions, so I decided to just state
them in case they are easily dispelled. Except for the two+ minds one (the
resource limits), I feel that these assumptions, along with Murphy's Law, will
survive into and through a singularity.

 - Unsolvable problems aside, even with near-infinite intelligence not
everything will be deduced ahead of time. There will still be a lot of
exploration of the state-space of technology and its implementation.
 - For some problems, two or more minds working together produce better/
cleaner/faster solutions, or more of them, than the same minds working alone.
(This is based on my assumption that at any one moment there is an upper limit
on the amount of resources that can be constructively committed to a single
mind.)
 - Many problems can be solved more quickly with directed experimentation, or
even exhaustive testing, than with the pure application of intelligence.
 - You can't always tell which problems are which ahead of time.
 - Without 100% knowledge of the universe, there will be problems that can't be
solved, or even predicted through simulation.
 - For any given intelligence level, there are good plans that will just fail,
for reasons you could not have foreseen.
 - We might explore the wrong paths, and commit to things that fail.
 - Some experiments and tests might be unpredictably disastrous or fatal.
 - Sometimes action is required, whether a correct solution or knowledge is on
hand or not.

From: "Arona Ndiaye"
> In the OpenSource world, securing a box or software is not that impossible.

 - Look at the latest CERT advisory for SNMP: it has hit many, many vendors,
and some heavy consequences are possible. According to one vendor, everyone
was vulnerable because they had all integrated some common (open) code from
one of the earliest SNMP implementations. (No, I don't think an SI would fall
for this; even human intelligence shouldn't have. Use it as a model of finding
a flaw in your foundations; there's a sketch of that class of flaw after this
list.) But:
 - In general, when I look at open source, I see a bunch of peers coming
together on their own, openly sharing ideas, with no strict (rigid) hierarchy
of control or management. Experience and knowledge show through and are
usually respected. Peer review is important. If you or your friends don't like
something, or the direction things are going, you're free to go off and
implement your own ideas, toss them up, and see if they stand on their own. If
they're better, people might adopt them. Multiple ideas are active at once,
competing with each other. You usually get more ideas out of the group than
you put in.
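
 To make the SNMP point concrete, here is a minimal sketch of the class of
flaw involved: a parser that trusts a length field from the wire. This is
illustrative C of my own invention, not the actual vulnerable code, and every
name in it (parse_community_broken, COMMUNITY_MAX, and so on) is hypothetical.

/* Hypothetical sketch, not the real SNMP code: a BER-style parser
 * that trusts an attacker-supplied length octet, the general class
 * of flaw behind the advisory. */
#include <stddef.h>
#include <string.h>

#define COMMUNITY_MAX 64

struct pdu {
    char community[COMMUNITY_MAX];
};

/* Broken: copies however many bytes the wire claims to contain. */
int parse_community_broken(struct pdu *p, const unsigned char *buf,
                           size_t buflen)
{
    (void)buflen;                       /* ignored: that's the bug */
    size_t len = buf[1];                /* length octet from the wire */
    memcpy(p->community, buf + 2, len); /* overflows if len >= COMMUNITY_MAX */
    return 0;
}

/* Fixed: check the claimed length against what was actually received
 * and against the destination's capacity before copying. */
int parse_community_fixed(struct pdu *p, const unsigned char *buf,
                          size_t buflen)
{
    if (buflen < 2)
        return -1;
    size_t len = buf[1];
    if (len > buflen - 2 || len >= COMMUNITY_MAX)
        return -1;                      /* reject malformed input */
    memcpy(p->community, buf + 2, len);
    p->community[len] = '\0';
    return 0;
}

 The specific bug doesn't matter; the point is that once the broken version
ships inside everyone's common foundation, every downstream vendor inherits
it at once.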

 With all of this in mind, the Sysop and singleton-AI scenarios look fragile:
a commitment of everything to a single solution, where one failure could lead
to a total loss.
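
 (A toy illustration with made-up numbers: if each independently developed
design has a 1-in-100 chance of catastrophic failure, committing everything
to one design leaves a 1% chance of total loss, while three competing designs,
assuming their failures are independent, lose everything only with probability
0.01^3, or one in a million. The singleton gives up exactly that kind of
redundancy.)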

From: "Eliezer S. Yudkowsky"
> Incidentally, a Sysop is not universal "by definition" but because it
> descends from, or was constructed by, a singleton superintelligence with no
> local competitors [...]

 For the reasons above I don't think that a Friendly AI would arrive at the
singleton Sysop scenario outlined in the FAQ and Eliezer's write-up. I'll
admit one ready exception - the Sysop could turn out to be the lesser of
several evils, at a time when "action is required, whether a correct solution
or knowledge is on hand or not."

From the FAQ:
> In order to do its job the Sysop would probably have to be the most powerful
> entity in Sysop Space
....[on to a different topic]....
> unless its understanding of Friendliness were considerably different than the
> general ethical consensus of today

 Would the Sysop be more powerful than any two entities combined? Any billion?
All of them? Regardless of the subject, if 95% of the Sysop's constituents
said "Sysop, we think you and the other 5% are wrong," would the Sysop back
down?

> Egan never did quite explain why no polis ever made war on any other

 If I were the author trying to retrofit an excuse, I would say that the same
failings that caused the people to turn inward and start ignoring the outer
universe also had the side effect of ending warlike conflicts. They didn't
really care enough. It took a person newly created by the sysop to break them
out of their inward spiral.


