Re: AI debate at San Jose State U.

From: Richard Loosemore (rpwl@lightlink.com)
Date: Fri Oct 21 2005 - 12:23:25 MDT


Olie Lamb wrote:
> Richard Loosemore wrote:
>
>> <Snip>
>>
>> As far as I am concerned, the widespread (is it really widespread?)
>> SL4 assumption that "strictly humanoid intelligence would not likely
>> be Friendly ...[etc.]" is based on a puerile understanding of, and
>> contempt of, the mechanics of human intelligence.
>
>
> It's not difficult to show this by a definitive fiat:
>
> Woody Long defined Humanoid intelligence:
>
> 1. "Humanoid intelligence requires humanoid interactions with the world" --
> MIT Cog Project website
>
> This means a fully "human intelligent" SAI must...feel the thrill of
> victory and the agony of defeat.
>
> If we accept that to be "humanoid", an intelligence must get pissed-off
> at losing, we can also define "humanoid" as requiring self-interest
> /selfishness, which is exactly the characteristic that I thought
> friendliness was trying to avoid. An intelligence that cares for all
> intelligences will necessarily care for its own well being. Putting
> emphasis on one particular entity, where the interests are particularly
> clear, is the start of unfairness. Strong self interest is synonymous
> with counter-utility. You don't need to get stabby and violent for
> egocentrism to start causing harm to others. Anyhoo, strong self
> interest does not necessarily lead to violent self-preservation, but it
> has a fair degree of overlap.

This bears very precisely on one of the points that I wanted to discuss
in some of my earlier posts to this list.

Is it really the case that to be humanoid, an intelligence must get
pissed off at losing, feel selfishness, etc.?

The answer is a resounding NO! This is one of those cases where we all
need to take psychology more seriously: from the psych point of view,
the motivational/emotional system is somewhat orthogonal to the
cognitive part, which means that you could have the same intelligence
and yet be free to design it with all sorts of different choices for
the motivational/emotional apparatus.
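
To make that orthogonality concrete, here is a minimal toy sketch
(purely illustrative, and all the names are my own invention, not a
claim about any real architecture): one cognitive engine that only
predicts the consequences of candidate actions, paired with
interchangeable motivational modules that score those consequences.
Same cognition, different drives, different behaviour.

# Illustrative sketch: cognition decoupled from motivation.
# All class/function names are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    name: str
    outcomes: Dict[str, float]  # predicted effects of taking this action

class CognitiveEngine:
    """The 'intelligence' part: models candidate actions and their
    consequences. It knows nothing about what the system wants."""
    def candidate_actions(self) -> List[Action]:
        return [
            Action("compete_aggressively",
                   {"own_gain": 1.0, "harm_to_others": 0.6}),
            Action("cooperate",
                   {"own_gain": 0.5, "others_gain": 0.5, "harm_to_others": 0.0}),
            Action("self_sacrifice",
                   {"own_gain": -0.2, "others_gain": 1.0, "harm_to_others": 0.0}),
        ]

# A motivational module is just a scoring function over predicted outcomes.
MotivationalModule = Callable[[Action], float]

def selfish_drive(a: Action) -> float:
    return a.outcomes.get("own_gain", 0.0)

def empathic_drive(a: Action) -> float:
    # Values others' welfare, penalises harm; own gain counts only a little.
    return (a.outcomes.get("others_gain", 0.0)
            - a.outcomes.get("harm_to_others", 0.0)
            + 0.1 * a.outcomes.get("own_gain", 0.0))

def choose(engine: CognitiveEngine, motivation: MotivationalModule) -> Action:
    """Same cognitive engine; the choice depends on the plugged-in drives."""
    return max(engine.candidate_actions(), key=motivation)

if __name__ == "__main__":
    engine = CognitiveEngine()
    print(choose(engine, selfish_drive).name)   # -> compete_aggressively
    print(choose(engine, empathic_drive).name)  # -> self_sacrifice

The point of the sketch is only that the scoring function is a separate,
swappable component: nothing about the consequence-modelling machinery
forces one particular set of drives on the system.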

To be sure, if you wanted to make a thinking system that was *very*
human-like, you would have to put in all the same mot/emot mechanisms
that we have. But when you think about it a bit more, you find yourself
asking: *whose* mot/emot system are you going to emulate? Hannibal
Lecter's? Mahatma Gandhi's? There is enormous variation just among
individual human beings. Arguably you can have people who are utterly
placid and selfless and who have never felt a violent emotion in their
lives, and others in whom the violence system is just downright missing
(not merely controlled and suppressed, but simply not there).

The idea of "self-interest" is, I agree, slightly more subtle. Self-
interest might not be just an ad hoc motivational drive like the
others; it might be THE basic drive, without which the system would
just sit there and vegetate. But I believe there is strong evidence
that, in humans, you can have people who are very strongly motivated to
think and learn and yet who are also selfless and even self-sacrificing
(in other words, it looks as though self-interest may not be required
for the creature to be intelligent).

I would not presume to try to answer your question fully in these few
paragraphs. What I do want to establish, though, is that we tend to
make simplistic assumptions about the relationship between the
motivational and emotional characteristics of an AGI and the mot/emot
systems we see in humans, in the specific sense that we often think
that if we make the thing at all "humanoid", then we get the entire
human mot/emot system as a package. That is not true. So we as AGI
researchers need to think about this connection in a great deal more
depth if we are going to come to sensible conclusions.

I will now jump ahead and say what I believe the main conclusions would
be if we did analyse the issues in more depth: we would conclude that
*if* we try to build a roughly humanoid AGI *but* give it a mot/emot
system of the right sort (basically, one that is empathic towards other
creatures), then its Friendliness will be far, far easier to guarantee
than if we dismiss the humanoid design as bad and try to build some
kind of "normative" AI system.
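
In terms of the toy sketch above (again, illustrative only, not a
claim about how a real AGI would be built), this amounts to saying that
the thing we would need to get right, and to verify, is the
motivational module rather than the cognitive engine. For instance, a
crude hypothetical check might look like:

def motivation_never_prefers_harm(engine: CognitiveEngine,
                                  motivation: MotivationalModule,
                                  harm_threshold: float = 0.5) -> bool:
    """Crude check: the chosen action never predicts high harm to others."""
    chosen = choose(engine, motivation)
    return chosen.outcomes.get("harm_to_others", 0.0) < harm_threshold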

I don't have time to justify that position right now, but I want to
throw it out there as a possibility. At the very least, we should
discuss all the complexity and subtlety involved in humanoid
motivational/emotional systems, so that we can decide whether what I
have just said is reasonable.

After all, we agree that Friendliness is important, right? So should we
not pursue the avenue I have suggested, given the possibility that we
would arrive at the spectacular but counterintuitive conclusion that
giving an AGI the right sort of motivational system is the best
possible guarantee of getting a Friendly system?

Richard Loosemore


