Re: Positive Transcension 2

From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Thu Feb 19 2004 - 14:29:43 MST


Ben,

I've just finished reading your 14 February version of "Encouraging a
Positive Transcension".

It's taken me two reads of the paper to become clear on a few issues.

It seems to me that there are really three separate ethical issues at
the heart of the paper that have been conflated. They are: how can we
ensure that the next big advance in cognitive capacity in our neck of
the universe -

- is not a disaster for existing sentient beings (humans being the
    only ones we know of presently),

- doesn't fail to carry forward the gains made so far by existing
    creative sentient beings (including humans) and

- helps to drive (and does not prevent) further wondrous flowering of
    the universe.

While these issues clearly interrelate (will protecting existing sentient
beings lead to a stagnation in the flowering of the universe?) I think
there is something to be gained from being clear about each one.

And there is a special aspect to the first issue that shouldn't be
overlooked. The emergence of AGI is not some inevitable process that
Fate deals up to us. On the earth at least, it is the outcome of
deliberate actions by a few humans that could impact on the rest of
humanity (and perhaps a lot of the rest of the universe as well). So
while we discuss the ethics we want to see AGIs apply, we also need
to think about the ethics of what we ourselves are doing. If we can't
get our own ethics sorted out then I'm not too hopeful we'll be able to
generate appropriate and adequate ethics in our AGI progeny.

So let's start with how some humans might feel about some other
humans creating a 'thing' which could wipe out humans without their
agreement.

Ben, you said: "And this may or may not lead to the demise of humanity
- which may or may not be a terrible thing." At best, loose language
like this means one thing to most people: somebody else is being
cavalier about their future. At worst, they are likely to perceive an
active threat to their existence.

Frankly, I doubt anyone will care if humanity evolves or transcends to
a higher state of being so long as it's voluntary. To a timeless
observer it might be arguable that the humanity of 2004 (or whatever)
is no longer to be found - but the people who have evolved/transcended
will still feel like the humanity of the new era - they will not have
been obliterated. To mix this sort of change up with the death of
humanity via, for example, rather unnecessary discussions of
Nietzsche's notions of "a good death" and "Man is something to be
overcome" seems to me pointless and dangerous. After the "bad death" of
many thousands of people in the Twin Towers, the US has rained death on
many more thousands of people in the rest of the world. For AGI
advocates to be cavalier about the lives of billions of people is, to
my mind, to invite - very understandably - similar very nasty
reactions.

To withhold concern for other humans' lives because theoretically some
AGI might form the view that our mass/energy could be deployed more
beautifully/usefully seems simply silly. The universe is a big place
with, most likely, a mind-bogglingly large amount of mass/energy not
used by any sentient beings - so having a few billion humans on the
Earth or the nearby planets is hardly going to cramp the style of any
self-respecting AGI with a big brain.

I think the first step in creating safe AGI is for the would-be creators of
AGI to themselves make an ethical commitment to the protection of
humans - not because humans are the peak of creation or all that
stunningly special from the perspective of the universe as a whole but
simply because they exist and they deserve respect - especially from
their fellow humans. If AGI developers cannot give their fellow humans
that commitment or that level of respect, then I think they demonstrate
they are not safe parents for growing AGIs! I was actually rather
disturbed by your statement towards the end of your paper where you
said: "In spite of my own affection for Voluntary Joyous Growth,
however, I have strong inclinations toward both the Joyous Growth
Guided Voluntarism and pure Joyous Growth variants as well." My
reading of this is that you would be prepared to inflict a Joyous
Growth future on people whether they wanted it or not, and even if
this resulted in the involuntary elimination of people or other
sentients that the AGI or AGIs pursuing Joyous Growth somehow saw as
an impediment to its achievement. If I've interpreted what you are
saying correctly, that's pretty scary stuff!

I think the next step is to consider what values we would like AGIs to
hold in order for them to be sound citizens in a community of sentients.
I think the minimum that is needed is for them to have a tolerant,
respectful, compassionate, live-and-let-live attitude. This is what I
personally would hope for from all sentients - no matter how low or
mighty their intellectual powers. This doesn't mean that all human
behaviours or all AGI behaviours should be accepted. Cruel or
exploitative or oppressive behaviours by any sentient or group of
sentients would seem to me to be behaviours that should be resisted or
prevented.

I think AGIs that had a tolerant, respectful, compassionate,
live-and-let-live ethic would not intrude excessively on human
society. They might, for example, try to discourage female
circumcision or even go so far as stopping capital punishment in human
societies (I can't see that these practices would conform to the
ethics that the AGIs were given [under my scenario] by their human
creators/carers). As far as I can see, AGIs don't need to have ported
into them a sort of general digest of human-ness or even an
idiosyncratic (renormalised) essence of general humane-ness. I think
we should be able to be more transparent than that and to identify the
key ethical drivers that lead to tolerant, respectful, compassionate,
live-and-let-live behaviour.

I think these notions are sufficiently abstract to be able to pass your test
of being likely to "survive successive self-modification". They are not
tied to a form of humanity that is frozen in time, nor are they tied
conceptually to any particular form of life or sentience. And I think this
base ethic would be useful in guiding how AGIs relate to each other.

If AGIs adopted a tolerant, respectful, compassionate, live-and-let-live
ethic then I think that we would have pretty good assurance that the
emergence of AGIs was not going to be a disaster for any existing
sentient beings (including human beings) and that the gains made so
far by existing creative sentient beings would not be lost due to the
cavalier (or otherwise) annihilation of sentient societies by more
powerful AGIs.

Now I want to move on to the issue of how ethical systems might ensure
that AGIs help to drive (and do not prevent) further wondrous flowering
of the universe.

Ben, you proposed that AGIs should have an ethic of promoting
voluntary, joyous growth. The way you discussed this issue made it
sound as if all AGIs should have this goal/ethical structure. It's not
clear to me that all AGIs need such a goal structure for there to be a
wondrous flowering of the universe. The development of art/science
etc. that we love so much on the earth was the work of one species in
20 million. Perhaps only a small minority of AGIs need to be creative
or promoting "voluntary joyous growth" for there to be the unfolding that
you are hoping for.

My guess is that if we avoid human or AGI dictatorship, then humans of
all sorts will facilitate the creation of all sorts of AGIs. So long as these
AGIs all practice a tolerant, respectful, compassionate, live-and-let-live
ethic and so long as *some* AGIs pursue 'voluntary joyous growth' or
'growth in knowledge and development and application of creativity'
then I think a positive transcension will occur. I think we should look to
a plurality of AGI ethics as much or more than we should expect and
support a plurality of human goals and ethics. If people or AGIs are
happy with relatively unchanging lives (as we perceive it), then good on
them if that's what makes them happy or transcendent. It only takes a
small percentage of 'driven creatives' (whether human or AGI) to keep
evolution moving along. In a healthy mix of sentients it's probably also
a good idea if at least a reasonable percentage (5-10%???) are driven
by an urge to improve wellbeing for themselves and others - i.e. not
driven principally by growth in knowledge/patterns or hedonistic joy!
This need for a complementary mix of motivations within a
human/AGI/other sentient meta-population is another reason for having
a plurality of AGIs rather than just one. (I know that mindplexes could
be formed by groups of AGIs - but I still think that even a mindplex will
be better quality/wiser if we encourage the creation of many, diverse
AGIs with distinct perspectives.)

In your paper you suggest that we need AGIs to save humanity from
our destructive urges (applied via advanced technology). If having
AGIs around could increase the risk of humanity being wiped out to
achieve a more beautiful deployment of mass/energy then it might be a
good idea to go back and check to see just exactly how dangerous the
other feared technologies are. While nanotech and genetic engineering
could produce some pretty virulent and deadly entities, I'm not sure
that they are likely to be much more destructive than bubonic plague,
Ebola virus or smallpox have been in their time. There are a lot of
people around, so even if these threats killed millions (billions?)
they are unlikely to wipe out even most people. So should we seek help
against this scale of threat by creating something that might
arbitrarily decide to wipe out the lot of us on a whim?

I don't think that AGIs are inevitably a threat, but they will only be
safe if they are imbued with tolerant, respectful, compassionate,
live-and-let-live ethics - and I think those ethics will only be
imbued if their human makers are moved by the same ethic themselves.

-------

By the way, your interpretation of my idea of 'sustainability' as a
form of nostalgia misses the point I was trying to make. I don't think
one has to be a Luddite or a back-to-the-land/cave person to find some
value in
the concept of sustainability. I think life for any creative sentient is
made up of three processes:

- changing things for the better driven by need or delight

- engaging in a journey of life where change occurs but the change
    cannot be characterised as better or worse than what went before

- retaining conditions/things because they are needed (from a
    utilitarian point of view) or because they are valued
    existentially/morally etc.

These three processes combine a concern for both change and
continuity. At any particular moment not *everything* is changed nor is
*everything* retained (sustained).

The notion of combining continuity and change is particularly important
if you think that the purpose of change includes creating better
situations/things. If things can be improved then it is sensible to protect
these improvements from back-sliding when yet further changes are
made.

Nostalgia plays a part in this for some people but generally I suspect a
fairly small part in this whole process of managing/fostering continuity
and change.

----
Oh well, I hope what I've said is of some use!
Cheers, Philip

