Re: Envisioning sysop scenarios Re: Universal Uplift as an alternative to the Sysop scenario

From: xgl (xli03@emory.edu)
Date: Sat Mar 24 2001 - 15:11:54 MST


        regarding the recent discussion of fsi/ufsi warfare ... (i'm about
200 messages behind in my inbox).

On Thu, 22 Mar 2001, James Higgins wrote:

>
> So, are you saying that the Friendly SI could get into a 'war' with an
> unFriendly one? How could an SI be perfectly Friendly and capable of
> violence?

        "friendly," in the context of the si goal system, is merely a
label for something vastly more complex than any human concept. violence
is an anthropomorphic term. i guess we can attribute violence to si
warfare only to the extent that, when intoxicated, we might attribute
cruelty to the laws of physics.

> I find this very hard to believe. If nothing else, the Friendly
> SI would be limited in its actions (collateral damage, avoid excessively
> devastating attacks, etc). In a perfectly even match the individual with
> the least restrictions usually wins.
>

        hmmm ... the fsi's super-goal is something like "protect human
beings" ... while the ufsi's super-goal, in the very worst case, would be
"destroy human beings" ... seems like a pretty symmetrical situation to
me.

        of course, there might be a slight symmetry-breaking since human
beings are to a certain extent self-destructive ... or at least easily
perishable. however, to what extent this is significant to an si is open
to speculation. moreover, it is pretty unlikely that the fsi and the ufsi
are going to be so closely matched at all. either the creation of the fsi
precedes that of the ufsi, in which case the fsi has a head start in an
exponential growth race, or the ufsi is not of human origin, in which case
all bets are off. in fact, my current understanding of si ecology suggests
that interstellar migration would incur severe costs for a mature si.
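
        to make the head-start point concrete, here is a toy python
sketch (my own illustration; the doubling time and head start are
made-up numbers, not a model of actual si growth):

    # toy sketch of the head-start argument; purely illustrative numbers.
    # assume, hypothetically, that capability doubles every `doubling_time`
    # units and that the fsi starts `head_start` units before the ufsi.

    def capability(t, doubling_time=1.0):
        # capability after t time units, normalized to 1.0 at t = 0
        return 2.0 ** (t / doubling_time)

    def fsi_to_ufsi_ratio(t, head_start, doubling_time=1.0):
        # ratio of fsi to ufsi capability, t units after the ufsi starts
        fsi = capability(t + head_start, doubling_time)
        ufsi = capability(t, doubling_time)
        return fsi / ufsi

    for t in (0, 5, 10):
        # with a 10-doubling head start the ratio stays at 2**10 = 1024,
        # while the absolute gap in capability keeps widening
        print(t, fsi_to_ufsi_ratio(t, head_start=10))

the upshot: a fixed head start is a constant multiplicative advantage
that never closes, while the absolute capability gap keeps widening.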
 
> And if the Friendly AI is capable of all this, I'm not so sure how
> perfectly friendly it would actually be itself.
>

        finally, the sysop isn't a human being. it doesn't have to suffer
from cognitive dissonance to the point of catatonic shock. we're not
talking about something as brittle as asimov's laws here. we can be pretty
sure that any mistake obvious to a human (such as sitting down and
crying; trying to save everyone but saving no one; etc.) will not be
made. as to what the actual strategy will be, a) it will depend on the
situation, and in any case b) it is probably beyond my meager allotment
of intelligence. my speculation: if it really comes down to it, some way
of trading off one sentience against another (or one aspect of it
against another) will probably have to be found. if we retain the (imho
naive) notion that all sentiences
are absolutely equal, then the default selection semantics might be
random.
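
        for concreteness, here is a minimal python sketch of what "random
default selection semantics" could mean (the function name and sample
data are my own invention, not anything specified by the sysop design):

    import random

    # toy sketch: if all sentiences are weighted strictly equally and only
    # k of them can be preserved, selection reduces to a uniform random draw.

    def select_to_preserve(sentiences, k, rng=random):
        # uniformly sample k sentiences; keep everyone if k covers them all
        if k >= len(sentiences):
            return list(sentiences)
        return rng.sample(list(sentiences), k)

    print(select_to_preserve(["sentience-a", "sentience-b", "sentience-c"], 2))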

-x


