RE: continuity of self [was META: Sept. 11 and Singularity]

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Sep 17 2002 - 09:19:00 MDT

Chris Rae wrote:
> Well, there are 2 possible outcomes - AGI will be for human life or
> against it. The chance that AGI will ignore us & leave us to our own
> devices is small. If AGI is against us, we don't stand a chance -
> we'll all be dead. This leaves only one viable option: AGI will help
> us - all the others IMO are basically irrelevant.

I do not accept this dichotomous thinking.

Are we humans for or against dolphins?

What about an "Honest Annie"-type AI (see the story by Stanislaw Lem)
that simply ignores us?

There are VERY many possibilities outside the human-politics dichotomy
of "for us or against us."


> > Even when an AGI has transcended us, it may still end up carrying
> > out some of our agendas implicitly, via the influence of its
> > initial condition on its later state.
>
> How will it be possible to restrict which initial conditions the AGI
> is able to alter?

This ground has been covered very thoroughly on this list.

Of course, a highly advanced AGI will be able to alter itself. However,
its initial conditions partially determine which alterations it will be
inclined to make.

This notion is central to Eliezer's approach to Friendly AI, as well as
to my own, slightly different approach.
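
To make the idea concrete, here is a toy sketch in Python. This is
purely my own illustration, not Eliezer's design or mine; the agent,
goal functions, and state variables ("welfare", "harm") are invented
for the example. The agent is free to rewrite its own goal, but it
evaluates each candidate rewrite using the goal it currently holds, so
the initial goal biases which rewrites it will accept:

from typing import Callable, Dict, List

State = Dict[str, float]
Goal = Callable[[State], float]

def initial_goal(state: State) -> float:
    # The "initial condition": value human welfare, disvalue harm.
    return state["welfare"] - state["harm"]

class SelfModifyingAgent:
    def __init__(self, goal: Goal):
        self.goal = goal  # mutable: the agent may rewrite this freely

    def consider_rewrite(self, candidate: Goal, states: List[State]) -> bool:
        # Predict which outcome the candidate goal would steer toward,
        # then judge that outcome by the goal the agent holds NOW.
        predicted = max(states, key=candidate)
        best_now = max(self.goal(s) for s in states)
        if self.goal(predicted) >= best_now:
            self.goal = candidate  # rewrite accepted
            return True
        return False  # rejected: it leads somewhere the agent now disvalues

# A rewrite toward "maximize harm" is rejected, because the agent's
# current (initial) goal scores the predicted outcome badly:
states = [{"welfare": 1.0, "harm": 0.0}, {"welfare": 0.0, "harm": 1.0}]
agent = SelfModifyingAgent(initial_goal)
print(agent.consider_rewrite(lambda s: s["harm"], states))         # False
print(agent.consider_rewrite(lambda s: 2 * s["welfare"], states))  # True

The point of the toy is just that "able to alter anything" and
"inclined to alter anything" come apart: the inclination is itself
shaped by the initial goal.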

-- Ben G
