Re: Diaspora, the future of AI & value systems, etc.

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Dec 12 2000 - 12:24:08 MST


Ben Goertzel wrote:
>
> The point is, why don't humans routinely kill other humans, in most
> cultures?
>
> Is it only because they're afraid of what will happen when they're caught?

Not just they, but their *genes*, are afraid of getting caught. When a
person is afraid of being caught, it expresses itself as fear. When a
gene is afraid of being caught, it expresses itself as an honorable
commitment not to kill.

There's also a memetic selection effect. If John Doe says to Sally Sue,
"My philosophy is: Look out for John Doe", Sally Sue doesn't hear "Your
philosophy should be: Look out for John Doe." Sally Sue hears "Your
philosophy should be: Look out for Sally Sue." It is literally
impossible to communicate a speaker-biased meme; the speaker bias is
always transformed into a listener bias. Since this does not help John
Doe in any way, and since we need (needed in the ancestral environment) to
argue moral issues in real time, we have a built-in philosophical instinct:
a bias towards framing moral issues in ways that are *overtly*
observer-independent... however subtly or bluntly the moral rules we
propound may in fact be chosen on the basis of how they favor the speaker.

As I said:

> Creating an AI that "loves mommy and daddy" may not produce anything like
> the results that you would get if you added a "loves mommy and daddy"
> instinct to a human. In fact, I'm seriously worried about the prospect of
> AIs with "instincts" at all. Humans don't just have instincts - for
> survival, for compassion, for whatever - but also whole vast hosts of evolved
> complexity that prevent those instincts from getting out of hand in silly,
> non-common-sensical ways. In some ways, humanity is very, very old -
> ancient - as a species. We can do things with instincts that you can't
> expect to get if you just pop instincts into an AI. In all probability,
> we can do things that you couldn't expect if you just popped instincts
> into a pure superintelligence(!)
>
> Creating an AI whose goal is to "maximize pleasure" is really dangerous,
> much more dangerous than it would be to tell a human that the purpose of
> life is maximizing pleasure.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


