One or more FAIs??

From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Sat May 29 2004 - 19:11:10 MDT


Hi Michael

From your email:
> > 1. Why do you believe that a single FAI is the best strategy?
> >
> MRA: a) It is simpler to create.
> b) Having one being around with the capability of destroying humanity is
> less risk than having more than one, in the same way as having one human
> being with a Pocket Planetary Destruct (TM) device is less risky than
> having more than one.

This logic doesn't work at the most basic level. It seems to me that the
Singularity Institute will *not* be the first to create an AGI - so the
Singularity Institute has to actively promote the creation of more than
one friendly/Friendly AI (i.e. including those created by others).
Otherwise there will be several, possibly not-Friendly AIs first, and
only *later* perhaps a single FAI (if the SI happens to achieve what
it's setting out to achieve).

It seems to me that the Singularity Institute has to build its
strategies on the idea that there will be several/many AGIs, and work to
ensure that as many of them as possible are friendly/Friendly. Frankly I
can't see that the Institute has any other realistic choice.

Cheers, Philip
