Re: Diaspora, the future of AI & value systems, etc.

From: Gordon Worley (redbird@rbisland.cx)
Date: Tue Dec 12 2000 - 13:27:28 MST


At 2:24 PM -0500 12/12/2000, Eliezer S. Yudkowsky wrote:
>Ben Goertzel wrote:
>>
>> The point is, why don't humans routinely kill other humans, in most
>> cultures?
>>
>> Is it only because they're afraid of what will happen when they're caught?
>
>Not just they, but their *genes*, are afraid of getting caught. When a
>person is afraid of being caught, it expresses itself as fear. When a
>gene is afraid of being caught, it expresses itself as an honorable
>commitment not to kill.

Why a gene? How are genes afraid? From the gene research I've read,
genes have only been shown to determine physical traits, not innate
behavior. I realize that this is an Argument from Ignorance, but
maybe we are both thinking about the same concept: natural laws.
Humans make a commitment not to kill because that is the way the
system works. It works that way because it is the most selfish way
it could work, since other people can do more for one alive than
dead. If we're not thinking of the same thing, can you please
explain what you mean by genes being afraid and how that leads to a
commitment not to kill? Are you thinking in the sense that people
want their genes to be carried down into later generations, and that
if they die then their genes end there?

>> AIs with "instincts" at all. Humans don't just have instincts - for
>> survival, for compassion, for whatever - but whole vast hosts of evolved
>> complexity that prevent the instincts from getting out of hand in silly,

Maybe I just don't know my psychology that well, but as I recall
instincts, that is, inborn traits, have not been proven to exist as
widely as you are claiming. I realize that, for practical purposes,
instincts don't differ very much from natural laws, but if AIs must
follow natural laws, then they have no need for instincts and will
develop complex behavior based solely on the nature of their
existence.

>> Creating an AI whose goal is to "maximize pleasure" is really dangerous,
>> much more dangerous than it would be to tell a human that the purpose of
>> life is maximizing pleasure.

Maybe so, since pleasure is not necessarily the best choice, but an
AI that is completely selfish should be safe, or at least fair. If
ve is not programmed to be selfish, then ve has not been given the
same standing that you or I have, even if ve is more intelligent
than us. And if ve were not selfish, how could ve coexist with
selfish humans? An unselfish AI would end in destruction one way or
another, because ve would either eliminate the problems that keep
humans in check or destroy humans in the process of being altruistic
toward some other group, whether humans or AIs.

-- 
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003

