Re: Military in or out?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Feb 26 2001 - 00:51:15 MST


This thread is wandering off-topic for SL4. I hereby veto further
discussion of the military's general attitude and research competence as
being more debatable than resolvable.

In terms of possible interference, either their attitude is "bah humbug"
or it isn't. In terms of possible independent research, either their research
totally flops, or something interesting might happen. I still think this
is an interesting and appropriate list topic, but just pick one of the
preconditional assumptions, one way or the other, so you can get to
the SL4-specific scenarios.

My own view is that I'd prefer to see no military involvement at all in
"real" AI; the possibility of increased funding does not compensate for
the disadvantages of less open development, not to mention the possibility
of pitched intelligence-agency battles being fought inside my laboratory.
Given military involvement... well, asking an AI to harm humans today for
the greater good tomorrow is not the *total* ethical meltdown it would
represent in a human, but it does represent a significant compromise of
Friendliness.

If anyone asks me my views on AI and national security, my answer will be
"no first use" and "no escalation". The US should not be the first to
make AI research projects an intelligence acquisition target. The US
should not be the first to use AI for information warfare - not even in a
defensive capacity, since that would render AI research projects into
military targets. While I would reluctantly understand the necessity of
using automated-weaponry or combat-coordinator "humankiller AI" to defend
the US *against a similarly equipped enemy*, doing so would still
represent a serious compromise of Friendliness. The AI would value enemy
lives on the battlefield, and might reject the proffered utilitarian
rationale for specific cases; trying to prevent either of these conditions
would probably result in a total compromise of Friendliness. In summary,
being the first party to compromise Friendliness, for *any* reason, would
compromise planetary security, regardless of any short-term national
objectives served.

I'll fold that up and send it to the NSA, but only if *they* ask me first.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


