RE: Military Friendly AI

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jun 27 2002 - 10:24:21 MDT


> To summarize the summary, the main danger to Friendliness of
> military AI is
> that the commanders might want a docile tool and therefore cripple moral
> development. As far as I can tell, there's no inherent danger to
> Friendliness in an AI going into combat, like it or not.

In my view, the main danger to Friendliness of military AI is that the AI
may get used to the idea that killing people for the right cause is not such
a bad thing...

Your arguments for why Friendliness is consistent with military AI are based
on your theory of a Friendly goal system as a fully logical, rational thing.

However, I think that any mind is also going to have an associational
component, one that rivals the logical component in power.

This means that its logical reasoning is going to be *guided* by
associations that occur to it based on the sorts of things it's been doing,
and thinking about, in the past...

Thus, an AI that's been involved heavily in military matters is going to be
more likely to think of violent solutions to problems, because its pool of
associations will push it that way.

Remember, logic in itself does not tell you how to choose among the many
possible series of logical derivations... from any given set of premises an
enormous number of valid derivation paths fan out, and something outside
pure logic has to select which of them actually get pursued.
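To make this concrete, here's a toy sketch in Python (my own illustration
for this message, not actual Novamente code; the rule names and weights are
all made up). Each candidate inference step below is equally valid
logically; an association table built up from past experience decides which
one actually gets pursued:

  import random

  # Hypothetical, equally valid ways to derive "resolve the conflict"
  RULES = ["negotiate", "blockade", "strike"]

  # Association strengths accumulated from experience; all start equal
  assoc = {name: 1.0 for name in RULES}

  def experience(name, weight=1.0):
      # Repeated exposure strengthens an association (a crude Hebbian bump)
      assoc[name] += weight

  def choose_derivation():
      # Logic ranks everything in RULES identically; the associational
      # weights are what actually select the next inference step
      weights = [assoc[name] for name in RULES]
      return random.choices(RULES, weights=weights, k=1)[0]

  # An AI steeped in military matters accumulates combat-flavored associations
  for _ in range(50):
      experience("strike")

  # Its "purely logical" problem-solving now samples the violent derivation
  # about 96% of the time, though nothing in the logic itself has changed
  counts = {name: 0 for name in RULES}
  for _ in range(1000):
      counts[choose_derivation()] += 1
  print(counts)  # e.g. {'negotiate': 18, 'blockade': 21, 'strike': 961}

Nothing in that little program's logic is unsound; the bias lives entirely
in the experiential weights.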

I don't want an AGI whose experience and orientation incline it to
associations involving killing large numbers of humans!

You may say that *your* AGI is gonna be so totally rational that it will
always make the right decisions regardless of the pool of associations that
its experience provides to it.... But this does not reassure me adequately.
What if you're wrong, and your AI turns out, like the human mind or
Novamente, to allow associations to guide the course of its reasoning
sometimes?

-- Ben G


