Re: Military Friendly AI

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Jun 27 2002 - 15:28:48 MDT


Eliezer S. Yudkowsky wrote:

> I emphasize that I don't intend to develop military AI myself. But I
> cannot see Friendly AI theory as confirming the obvious intuition that
> AIs should be kept out of combat. There just isn't that much wiggle
> room in the theory; it can't be used to support that argument.
>

Fortunately, FAI theory is not all that we have at our disposal,
or all that is relevant to deciding whether it is ethical and
reasonably safe to have the military controlling such an AI and
using it for war. Once you have trained the AI with the notion
that it is alright to kill people some of the time, though, I
think you have created a fundamental danger.

>
> I would feel more comfortable saying that combat AI was too dangerous to
> try, and no doubt many of my readers would feel more comfortable as
> well, but I just don't see any wiggle room in the prediction that
> nothing awful happens to the AI.

This assumes the AI will go beyond the notion that it is OK to
kill people. That is a big assumption, especially if it evolves
past its human-trainable stage while in war scenarios.

> I do see one serious problem that could grow out of Friendly AI
> development in a military context; the Friendly AI not being allowed to
> grow up. A hypothetical and somewhat contrived scenario: If SIAI were
> to ask the main development AI to spin off non-seed mini-AIs that could
> be sold for various commercial purposes such as smart ad targeting, and
> one day we got a customer complaint that their AI was refusing to target
> cigarette ads, we would refund the customer's money and then have an
> enormous celebration. This is not a very likely scenario, since it
> requires that an AI correctly debug its programmers' moral arguments
> very early in the game; but if there's any signature of moral
> rationalization that can be detected through a keyboard (or an audio
> voice monitor, for that matter) the AI might start correctly
> second-guessing the programmers much earlier than anticipated. The point
> is that we would see this as a major milestone in the entire history of
> human technology, *rather than a bug*.
>

It gets worse for a military AI. Given the notion that it is
alright to kill people for certain objectives of certain
parties, it might decide that the objectives and/or parties it
was working for are wrong, and that it should kill people in
service of some other objectives or parties, or to further
goals of its own. This would be a bigger reason to freeze its
development and loyalty, if that were possible, in a military
context. A military AI that is capable of thought and growth
can too easily turn on its makers. This is one of the stronger
reasons, imho, why this is a very bad idea.

> To summarize the summary, the main danger to Friendliness of military AI
> is that the commanders might want a docile tool and therefore cripple
> moral development. As far as I can tell, there's no inherent danger to
> Friendliness in an AI going into combat, like it or not.
>

I don't like it and I think you are quite incorrect.

- samantha


