RE: Military Friendly AI

From: Smigrodzki, Rafal (SmigrodzkiR@msx.upmc.edu)
Date: Thu Jun 27 2002 - 16:45:38 MDT


 Eliezer S. Yudkowsky [mailto:sentience@pobox.com] wrote:

Sure, when the AI is young. When the AI grows up I would expect it to rerun
the programmers' perceived considerations involved in their moral decisions,
come to the conclusion that violent solutions were not as desirable as it
was told (assuming the "honorable soldiers" are not actually correct!),
model its own growth for biases that could have been introduced, and wash
out the biases. Of course a Friendly AI has to be able to do this once it
grows up! It's not just a military question!

### Do you expect that the search for the external referent *must* yield a
single structure? As Ben mentioned, associations and rules of thumb will
play a role in the decision-making, since an "ab initio" modeling of the
programmers is likely to be impossible, and the basis of their ethical
convictions impossible to ascertain directly. So the individual AI's
experience will have lasting effects on its behavior, effects not
correctable by analysis of the past and requiring experimental verification
(actually doing something and observing the long-term effects). Maybe, if
the experiment goes on for a long time, the AI will finally reach the
objectively defined Friendliness, but this is not likely to happen in an
environment complicated by competing AIs. Instead, ve could end up in a
limbo not subjectively (AI-level) distinguishable from true Friendliness.

Say the FAI is informed by the other AIs (during mortal combat) that the
empirical data ve used to derive vis current practical implementation of
Friendly behavior is untrustworthy. The other AIs provide plausible but
not fully verifiable explanations. Will the FAI change vis Friendliness?
Will ve trust the others or vis own commanding ethics officer? How will ve
deal with uncertainty at a level much higher than the analysis of human
motivations - at the level of other SAIs?

-------
> As you later say, the growing up of the infant AI might be unfavorably
> affected, in the distant analogy to the detrimental effects of early
> childhood emotional trauma in humans.

This is an appealing analogy which happens to be, as best I can tell,
completely wrong.

### Yes, it's just a little eristic trick of mine, an appealing analogy of
no intrinsic truth value.

Rafal


