Re: Encouraging a Positive Transcension

From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Wed Feb 11 2004 - 09:28:34 MST


Ben,

Which list do you want Encouraging a Positive Transcension discussed
on? AGI or SL4? It could get cumbersome having effectively the same
discussion on both lists.

Thanks for the paper. It was a stimulating read.

I have a few quibbles.

You discuss the value of aligning with universal tendencies. How can
you know what's really universal, since we're in a pretty small patch of
the universe at a pretty circumscribed moment in time? If we
happened to be in a universe that oscillated between big bangs, what
looked universal might be rather different in the expanding phase
than in the contracting phase.

Also, socially, certain things look universal until some new possibilities
pop up. If judged at an early stage of the said popping, the new thing
might look like a quaint, quixotic endeavour. It's only after the quaint,
quixotic idea becomes widespread that its inherent universalism
becomes clear.

On the subject of growth - do you really want to foster growth per se
(quantitative moreness) or development (qualitative improvement)?

I’ve got a feeling that promoting growth or even development is not a
rounded enough goal. I think there’s something powerful in the idea of
promoting (using my pet terminology) ‘genuine progress’ AND
‘sustainability’. So at all times philosophising entities are considering
what they want to change for the better for the first time, and they are
also thinking about what should be maintained from the past/present
and carried through into the future. So both continuity and change are
important paired notions.

Your principle that "the more abstract the principle, the more likely it is
to survive successive self-modification" seems to make intuitive sense
to me.

You said that “in order to make a Megalomaniac AI, one would
probably need to explicitly program an AI with a lust for power.”
Wouldn’t it be rather easy to catch the lust-for-power bug from the
humans that raise an AGI - or even from our literature? I think there’s
a high chance that at least a few baby AGIs will be brought up by
megalomaniacs. And one super-powerful megalomaniac AGI is
probably more than we want and more than we can easily deal with.

If humans are allowed to stay in their present form if they wish, and if
some humans as they are now might go dangerously berserk with
advanced technology, and we go down the AI Buddha path, then the
logic developed in your paper seems to suggest that AI Buddhas will
also have to take on a Big Brother role as well as whatever else they
might do.

Cheers, Philip
