Re: Military applications of SI

From: James Rogers (jamesr@best.com)
Date: Tue Mar 06 2001 - 16:13:35 MST


At 03:20 PM 3/6/2001 -0600, Jimmy Wales wrote:
>James Rogers wrote:
> > Machines of this level of capability would be vastly more expensive (and on
> > many levels, more complicated) than their human counterparts.
>
>Today, this is certainly true. But the whole point of singularity-thinking,
>of thinking about the implications of Moore's Law carried out for just another
>20-30 years, is that machines of this level of capability will become
>relatively _cheap_.

The problem is that if we presume these levels of technology, then we are
also almost necessarily presuming a huge number of other changes that would
render current military doctrine largely obsolete. In other words, assuming
the technological capability invalidates the very scenario required for the
conclusion -- this change in technology won't happen in a vacuum with
respect to other military hardware, or even with respect to the concept of
military conflict itself.

>But let's think about what might be possible with computers at 1,000
>or 100,000 or 1,000,000 times the power of our current computers *per
>dollar spent*. I'm not sure what shape things will take, of course.
>I'm just saying that it seems obvious to me that having _smarter_
>machinery is going to have important military implications.

You don't understand the actual limits on military hardware. In the real
world, hardware is largely limited by physical/mechanical limits rather
than computational limits. The most advanced, fastest systems produced by
the U.S. today (i.e. systems that are still in the testing stage), in terms
of real-time multiple target discrimination and evaluation, typically have a
MIPS R2000/R3000 or Motorola 68K as their central
processor. Why? Because these "old/slow" chips have more than
sufficient computational power for even the most advanced real-time
military applications. Today, the fundamental limits on military
technology are materials: designs routinely bump up against the theoretical
limits of material strength, hardness, toughness, heat resistance,
strength-to-weight ratios, etc., all compounded by mechanical design
considerations. Speed is king, and computers can already generate
directions faster than mechanical devices can physically respond to
them. Therefore, most advances in military capability today come from
superior materials and mechanical engineering rather than from
computational power.
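
To put rough numbers on that (every figure below is my own illustrative
assumption, not a spec from any real program): even a 68K/R3000-class
processor finishes a fire-control update far faster than a turret or barrel
actuator can physically slew and settle, so the mechanics set the pace, not
the computation. A minimal back-of-the-envelope sketch:

    # Back-of-the-envelope: computation time vs. mechanical response time.
    # Every number here is an illustrative assumption, not data from a real system.
    cpu_mips = 10.0e6                  # ~10 MIPS, roughly 68K/R3000-class throughput
    instructions_per_update = 50000    # assumed cost of one tracking/fire-control update
    compute_s = instructions_per_update / cpu_mips   # time to compute one update

    actuator_s = 0.050                 # assumed 50 ms for a servo to slew and settle

    print("compute per update:  %.1f ms" % (compute_s * 1e3))   # ~5 ms
    print("mechanical response: %.1f ms" % (actuator_s * 1e3))  # 50 ms
    print("mechanics slower by ~%.0fx" % (actuator_s / compute_s))

Even with generous padding on the instruction count, the processor spends
most of its time waiting for the hardware to catch up.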

>Now think of a super advanced 50 caliber machine gun...
>
>We aren't going to ask that it be a full-blown AI, or anything like
>that. The soldier carries it around like a regular machine gun, but
>it has a 'full-auto' mode like the mode on my camera. When the soldier
>pulls the trigger and starts spraying bullets, the gun actually times
>the firing of the bullets with an eye towards actually hitting hostiles
>and actually trying to miss friendlies.
>
>That requires a lot of intelligence. It requires a serious ability to
>do visual discrimination, and a lot of thought has to go into reducing
>false positives.

Impractical, for a number of reasons. First, you aren't always trying to
shoot people with machine guns; there are many good battlefield reasons to
shoot virtually anything, and clever use of "non-shootable" targets could
hinder completion of your mission. Second, humans frequently can't
discriminate friends from enemies even at arm's length, and certainly not at
a thousand meters; the enemy is not always obvious and may not lend
themselves to easy machine discrimination -- what if the enemy is wearing
your uniform? Also, what do you do when an enemy in civilian clothes hides
his gun when you point your machine gun at him -- does he become a
non-combatant in the eyes of the machine since the machine never saw the
gun? There is a huge amount of visual and non-visual context used in
target discrimination by humans that would be virtually impossible to
implement in the kind of system you are talking about, and I can easily
think of dozens of real-world examples that would almost certainly cause
problems for the machine. Third, the machine would have to make this
decision *and* redirect the barrel in a few hundredths of a second, through
the muzzle blast (which will screw up your sensors) and recoil, never mind
that the very rough and somewhat erratic recoil will play hell with your
optics and control systems (some rough numbers on this are sketched below).

And fourth, there is no bloody way a single soldier would carry a .50 BMG. :^)
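
Rough numbers on that third point, again with made-up but plausible values:
at a cyclic rate of 600 rounds per minute the gun fires a round every 100 ms,
and inside that window the system has to expose a frame, classify the target,
and re-lay the barrel while everything is shaking:

    # Illustrative timing budget for the hypothetical "smart" machine gun.
    # All values are assumptions for the sake of argument.
    rounds_per_minute = 600
    window_s = 60.0 / rounds_per_minute   # 0.1 s between rounds

    sensor_frame_s = 0.033     # one frame from a 30 Hz imager
    classify_s     = 0.020     # assumed target-discrimination time
    servo_slew_s   = 0.050     # assumed time to re-lay the barrel

    budget_s = sensor_frame_s + classify_s + servo_slew_s
    print("time between rounds:  %.0f ms" % (window_s * 1e3))   # 100 ms
    print("needed per decision:  %.0f ms" % (budget_s * 1e3))   # ~103 ms
    print("fits in the window?   %s" % (budget_s <= window_s))  # False

And that budget charitably ignores the muzzle blast obscuring the imager and
the recoil throwing the optics off between frames.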

>My only general point is that I see military implications for AI, all the way
>up the chain. Cool, clear-headed rational thought (particularly specialized
>thought about identifying hostiles and friendlies) is something that machines
>can provide on the battlefield.

In other words, machines will be used in a sensor data evaluation, fire
control, and battlefield advisory role, like they are today. Humans will
still pull the trigger.

-James Rogers
  jamesr@best.com


