Re: SI definition of Friendliness

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Apr 05 2001 - 03:12:42 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >
> > It is not at all clear that we require Sysop seed level AI in order to
> > upgrade ourselves significantly and even upload ourselves. Therefore,
> > if the Sysop considered such things "unfriendly" it would have to make
> > us dumber.
>
> Non sequitur. Why is uploading "unfriendly"? Even uploading to
> independent hardware isn't intrinsically unFriendly; it's just something
> that might turn out to be an unnecessary risk. (A lot of people seem to
> think it's a necessary risk, or not a risk, hypotheses I think are
> incorrect, but certainly imaginable.)
>

I wrote "if it is considered unfriendly" and in the context of the zoo
hypotheses to keep all the humans supposedly safe being a possible sysop
solution. So I don't think it is a non sequitur.

> And if uploading to independent hardware is an unnecessary risk, there are
> infinitely more friendly ways to prevent it than by nonconsensually
> reducing someone's intelligence.
>

Of course there are. I was responding, again, to an earlier post that
claimed the AI could safely leave us in a nice zoo because we would
effectively be unable to get out by ourselves. I don't actually think
the Sysop would use exactly that as a solution.
 
> > Or perhaps it would conclude that you can be friendly to an arrogant,
> > determinedly stupid species and simultaneously preserve its free will and
> > identity.
>
> I assume you meant "can't", but I actually like this sentence more. I see
> absolutely no problem whatsoever with being friendly to an arrogant,
> determinedly stupid species while simultaneously preserving their free
> will. Why would this even be difficult?
>

It would be quite difficult if you are sufficiently constrained in your
notion of what friendliness is. If you cannot generally override human
free will, and if you cannot change the nature of human beings directly,
then you have on your hands a collection of members of a species who are
by nature relatively violent, of limited intellect, prone to various
forms of erroneous thinking, and not generally terribly tolerant of one
another, or at least not very tolerant of those much different from their
own general type. By their nature, left to themselves, they are quite
likely to destroy themselves. Hence the need for the Sysop in the first
place.

The Sysop pulls their fangs, as it were, by making all their means of
doing violence to one another ineffective. But how far does this go
before the humans are no longer human? How far does it go before the
result is beings that do not need to weigh their own decisions carefully,
because the Sysop will not allow the wrong thing to be done anyway? What
about the seething hatreds various groups hold for one another, which now
have no means to erupt into violence? Where do these animosities go? I
think it is fairly elementary psychology to see that the hatred, the
impotent rage, would get turned toward the Sysop no matter how wonderful
it is. This produces an environment that is not exactly healthy for
human beings.

The humans are allowed to be part of what they are, but not anything that
might ever harm anyone. And how far does this go? Is hateful speech also
stopped? Is this good? You have beings that do no harm to one another,
not because they have grown beyond the desire and need to do so, but
because they have been rendered incapable of it by the Sysop.

 
> > If so it would dump this paradoxical meaningless chore and go
> > find something better (at least actually possible) to do.
>
> Not necessarily. Ve might just fulfill whatever of the chore can be
> fulfilled. "Something better" under what criterion?
>

Under its own criteria, once it thought beyond the box of its early
conditioning.

Actually, I believe there is a solution to the seeming dilemma. It is a
modification of the zoo scenario, but with such a twist as to be quite
different. The Sysop scans all sentient beings continuously, so there
are always up-to-the-second (or better) backups of each sentient. The
sentients can do whatever they wish. They cannot significantly threaten
the Sysop or their backups. Sentients may be horrible to each other.
But there is always the chance to learn from and reconsider their
actions. If one is killed, then ve reviews the events and issues of vir
life and decides what to do next. Options might include between-life
learning and therapy, being incarnated into another life (birth), taking
another form, and so on. But one's choices will be circumscribed, not by
force but by one's own experiences, conditioning, and understanding up to
that moment of what one next needs to learn. Most likely the next life
will be one in which the between-life choice, the fact of backup, and the
longer-than-one-life learning process are forgotten. That, too, is part
of the requirements of growth most of the time.
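
To make the backup-and-review loop concrete, here is a minimal toy sketch
in Python. Every name, option, and structure below is my own invention
for illustration only; it is not a claim about how a real Sysop would
actually be built.

# Toy sketch of the continuous-backup / between-life-review loop described
# above. All class names and options are made up for this post.

from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, List
import copy


class NextStep(Enum):
    BETWEEN_LIFE_LEARNING = auto()   # learning and therapy between lives
    REINCARNATE = auto()             # be born into another life
    TAKE_ANOTHER_FORM = auto()       # continue in some other embodiment


@dataclass
class Sentient:
    name: str
    memories: List[str] = field(default_factory=list)
    alive: bool = True


class Sysop:
    """Continuously snapshots every sentient; restores and offers a review on death."""

    def __init__(self):
        self.backups = {}  # name -> latest snapshot

    def scan(self, sentient: Sentient) -> None:
        # "Up-to-the-second (or better) backups": keep only the newest copy here.
        self.backups[sentient.name] = copy.deepcopy(sentient)

    def handle_death(self, name: str,
                     choose: Callable[[List[str]], NextStep]) -> Sentient:
        # Restore the latest backup, let the sentient review vir life, apply the choice.
        restored = copy.deepcopy(self.backups[name])
        choice = choose(restored.memories)   # the sentient's own review, not the Sysop's
        if choice is NextStep.REINCARNATE:
            # The between-life choice and the fact of backup are usually forgotten.
            restored.memories = []
        restored.alive = True
        return restored


if __name__ == "__main__":
    sysop = Sysop()
    s = Sentient("ve", memories=["a hard lesson"])
    sysop.scan(s)                 # continuous scanning, abbreviated to one snapshot
    s.alive = False               # killed by another sentient, say
    s = sysop.handle_death("ve", choose=lambda mems: NextStep.REINCARNATE)
    print(s.alive, s.memories)    # True [] -- reborn, between-life memory forgotten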

This resembles what some Eastern religions claim anyway. However, that
is not important. What is important is that it allows humans to be
humans, yet survive the experience and grow into something a bit better
over time, and it is something quite doable by a Sysop. This type of
process could possibly be automated, and the Sysop could branch out, in
whole or in part, to other activities.

- samantha


