Re: One or more FAIs??

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Sat May 29 2004 - 19:26:15 MDT


Philip,

You did not respond to either of the answers I gave to Mark's question; instead, you
wrote about related but different issues, which I would be happy to
comment on.

IRT SingInst promoting development of FAI: It is.

IRT SingInst's control over other people creating AIs: It has no control,
other than what it can muster through well-presented good ideas and
reasonable arguments.

MRA

----- Original Message -----
From: Philip Sutton
To: sl4@sl4.org
Sent: Saturday, May 29, 2004 18:11
Subject: One or more FAIs??

Hi Michael

From your email:
> > 1. Why do you believe that a single FAI is the best strategy?
> >
> MRA: a) It is simpler to create.
> b) Having one being around with the capability of destroying humanity is
> less risky than having more than one, in the same way as having one human
> being with a Pocket Planetary Destruct (TM) device is less risky than
> having more than one.

This logic doesn't work at the most basic level. It seems to me that the
Singularity Institute will *not* be the first to create an AGI, so the
Singularity Institute has to actively promote the creation of more than one
friendly/Friendly AI (i.e. including those created by others). Otherwise
there will be several, possibly not-Friendly AIs first, and only *later*
perhaps a single FAI (if the SI happens to achieve what it's setting out to achieve).

It seems to me that the Singularity Institute has to build its
strategies on the idea that there will be several/many AGIs and that as many
of them as possible should be friendly/Friendly. Frankly, I can't see that the
Institute has any other realistic choice.

Cheers, Philip


