Re: Military in or out?

From: Brian Phillips (deepbluehalo@earthlink.net)
Date: Sun Feb 25 2001 - 19:46:29 MST


Eliezer,
  I am more of a biomedicine geek than an infosystems geek. I am still
running ... well, I won't admit what I run on my puter (::wince::).
   I can, however, assure you that UNLESS your commentary on the difficulties
inherent in constructing near-human and transhuman AI is utterly spurious,
the Military/Industrial complex will NEVER cook up enough novelty to create
superintelligent AI. They might "buy" it, but they would never be able to
"build" it!
  The DoD/contractors probably will invest heavily in pre-superintelligent
AI. It's just a better way to organize, which the military DOES understand
all too well.
  But the theory here is that you have to be absolutely sure the sword will
not turn on you before you let it loose. Militaryfolks are paid to be
paranoid. If a superintelligent AI could not be utterly guaranteed to be
loyal, it would not be funded.
  I will go so far as to say that military programmers would deliberately use
less-than-optimal AI rather than run ANY risk that the "tool" would be
disloyal.

brian
d e e p b l u e h a l o

----- Original Message -----
From: Eliezer S. Yudkowsky <sentience@pobox.com>
To: <sl4@sysopmind.com>
Sent: Sunday, February 25, 2001 1:44 PM
Subject: Re: Military in or out?

> Ben Goertzel wrote:
> >
> > >
> > > There are no military applications of superintelligence.
>
> > c) a statement that once a system becomes REALLY SUPERINTELLIGENT it will
> > no longer have any motivation to serve any particular country above any
> > other one...
> > [My guess as to your intended meaning]
>
> Actually, I meant that - even assuming you can build a totally tame SI -
> ve still has no military applications. If SIs that are "toollike" rather
> than "mindlike" (structured around immediate instructions rather than
> supergoals) are possible, you might tell a "toollike" SI to kill every
> living thing in Iraq down to bacteria - I can't visualize ver fighting a
> battle or directing a tank brigade. There are no military applications of
> superintelligence because a superintelligence is outside the domain where
> "military" makes sense. Pre-superintelligent AI may have military
> applications; superintelligence, as Nick Bostrom meant the phrase, has
> none. (Nick Bostrom's original post nowhere states that SI has military
> apps; I'm not sure whether this was deliberate.)
>
> Regardless of what conclusions the military eventually comes to, my own
> conclusion is that superintelligence is not a military matter. Any AI
> which is *not* superintelligent is another question.
>
> But your questions are still valid, so...
>
> > Let's assume your meaning is c). Then I have a question for you. Which is:
> > Why couldn't the CIA create a self-modifying AI whose supergoal was "Serve
> > the USA." I.e., "Be Friendly to the USA."
> >
> > You posit that the supergoal "Be Friendly to Humans" can remain a fixed
> > point throughout the successive reason-driven self-modification events
> > that will constitute the path from initial AI to superintelligent AI.
> >
> > But, I'm not seeing why the supergoal "Serve the USA" couldn't serve as
> > an equally adequate supergoal, from the perspective of the development of
> > intelligence. Things like "learn" and "gather information" and "survive"
> > are subgoals of "Serve the USA" just as they are of "Be Friendly to
> > Humans."
>
> "Serve the USA" is compatible with generic supergoal semantics and
> external reference semantics; I don't think it's compatible with any
> greater philosophical sophistication on the part of the AI (i.e.
> anchor/shaper semantics or causal rewrite semantics). Subgoals can be
> convergent given supergoals. I believe that supergoals can be convergent
> given a panhuman grounding for philosophy. If you took "almost any" human
> and slowly but steadily amped up their intelligence, they would sooner or
> later write more or less the same definition of Friendliness - that's what
> I'm hoping for.
>
> Regardless of whether the complete definition is convergent, I would
> expect certain aspects of it to be convergent - for example, symmetry
> among all humans as a moral principle. If you took a military AI
> programmer and started slowly increasing her intelligence, she would
> probably stop thinking in terms of "Serve the USA" as a supergoal and
> start thinking in terms of "Serve the USA" as a subgoal of peace and
> freedom for everyone in the world. This process might be slow and awkward
> if the militaryfolk are totally lost to common-sense morality and don't
> identify with the AI - for example, if the AI asks "Is a human life in the
> USA really more *intrinsically* valuable than a human life in Iraq?" and
> the programmers answer "That is your mission priority"; if the AI asks
> "Wasn't Martin Luther King a nice guy?" and the militaryfolk answer "Not
> from your perspective."
>
> If the AI isn't built to be sensitive to programmer intentions - isn't
> built to be philosophically robust - then ve may never get past vis
> original supergoals, and may equally interpret those supergoals to mean
> "Duplicate copies of the USA, as it existed at the time defined, using all
> available matter to build as many copies as possible."
>
> Q: "Can non-Friendly AI or toollike AI be built?"
> A: "Not safely."
>
> I hope that AI never becomes the subject of an arms race - that the
> Singularity is finished, over and done with, before AI becomes regarded as
> a national security asset, or regarded as a target of acquisition by
> intelligence agencies. Failing that, I hope that military organizations
> are smart enough to develop AIs that serve their home country as a subgoal
> of Friendliness - rather than compromising planetary security by
> compromising sensitivity to programmer intentions. Failing that, I hope
> the military researchers are so dumb that their AIs are not competitive
> with those developed by those organizations that are smart enough to
> embrace Friendliness.
>
> Failing that, we're screwed.
>
> -- -- -- -- --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
