RE: continuity of self [ was META: Sept. 11 and Singularity]

From: Chris Rae (cmlrae@xtra.co.nz)
Date: Tue Sep 17 2002 - 08:11:32 MDT



>The idea that no AGIs will allow themselves to be used to implement agendas
>besides liberation just seems terribly overoptimistic to me.
>
>How can you know this?
>
>There is an awful lot we don't know about what AGIs will be like.
>
>I think Eliezer is overconfident about the success of his Friendly AI
>methodology, but you take overconfidence about the nature of future AIs to
>a whole new level!!!!

Well, there are basically two possible outcomes - AGI will be for human life
or against it. The chance that AGI will ignore us & leave us to our own
devices is small. If AGI is against us, we don't stand a chance - we'll all
be dead. That leaves only one viable option: AGI will help us. All the
other possibilities IMO are basically irrelevant.

>Even when an AGI has transcended us, it may still end up carrying out some
>of our agendas implicitly, via the influence of its initial condition on its
>later state.

How would it be possible to restrict which of its initial conditions the
AGI is able to alter?






