Re: Cold-War Disarmament Activism

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Wed Jun 28 2006 - 14:50:04 MDT


On Jun 28, 2006, at 7:31 AM, Joshua Fox wrote:
> 2. Therefore, though I also admire and support those who work to
> avert Singularity disaster and to bring a Friendly Singularity, an
> uncertainty-weighted cost/benefit analysis based on this analogy
> suggests that one need not devote time and money to the Friendly
> Singularity, just as most people who supported human survival did
> not give resources to the disarmament movement.

I'm not following your reasoning at all because of what appears to be
a badly broken analogy.

MAD in a nutshell:

MAD only works if there is an effective upper bound on the destruction
you can inflict on your opponent: at some point there is simply nothing
left to destroy. If both opponents can reliably cross that upper bound,
no difference in capability beyond the threshold yields a decisive
advantage. In such a race there is no value in increasing your
capability to destroy past a certain level, only in reducing your
opponent's capability to destroy.
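
To make that concrete, here is a toy sketch of my own (Python; the
value of OPPONENT_VALUE and the capability levels are arbitrary numbers
I made up for illustration, not anything from the actual doctrine):

    # Toy model of the MAD saturation argument. All numbers are
    # arbitrary illustrative assumptions.

    OPPONENT_VALUE = 100.0  # total destroyable value; past this, nothing is left

    def destruction_delivered(capability):
        # Destruction is capped by what actually exists to destroy.
        return min(capability, OPPONENT_VALUE)

    def marginal_advantage(capability, step=1.0):
        # Extra destruction bought by one more unit of capability.
        return destruction_delivered(capability + step) - destruction_delivered(capability)

    for c in (50.0, 99.0, 100.0, 1000.0):
        print(c, marginal_advantage(c))
    # 50.0 -> 1.0, 99.0 -> 1.0, 100.0 -> 0.0, 1000.0 -> 0.0:
    # once you can cross the ceiling, extra capability buys nothing.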

The reason it does not apply to AGI is simple: there is no capability
ceiling past which further growth and improvement stop yielding a
decisive advantage. A race toward a fixed destruction threshold is
self-limiting; recursive self-improvement is self-amplifying. Your
analogy basically conflates a positive feedback loop with a negative
feedback loop.
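
Again as a hedged toy model of my own (the growth rate, ceiling, and
time horizon below are arbitrary assumptions), the two feedback regimes
diverge like this:

    def mad_race(capability, years, growth=0.5, ceiling=100.0):
        # Negative feedback: returns saturate at the destruction ceiling.
        for _ in range(years):
            capability = min(capability * (1.0 + growth), ceiling)
        return capability

    def agi_race(capability, years, growth=0.5):
        # Positive feedback: each gain compounds, with no ceiling.
        for _ in range(years):
            capability = capability * (1.0 + growth)
        return capability

    print(mad_race(1.0, 20))  # 100.0 -- pinned at the ceiling, race stops mattering
    print(agi_race(1.0, 20))  # ~3325.3 -- the leader's gap keeps widening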

J. Andrew Rogers


