RE: Positive Transcension 2

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Feb 19 2004 - 22:31:33 MST


Philip,

   ***
   So let's start with how some humans might feel about some other humans
creating a 'thing' which could wipe out humans without their agreement.
  ***

  Well, I don't think that most humans understand the predicament of the
human race very well.

  If people don't understand the existential risks posed by other
technologies, how are they going to be able to participate in a serious
cost-benefit analysis regarding the creation of AGIs of various types?

  I'm a fan of the democratic process, and yet, I'm also a bit skeptical of
the ability of this process to make the right decisions in this kind of
area....

So much of the world population is religious ... of course they are going
to feel TOTALLY DIFFERENTLY about the various existential risks and the
benefits of transhumanity than nonreligious folks of a transhumanist bent...

  Do you really think that we should proceed with these technologies via
some kind of global majority vote? Bear in mind that around 80% of the
world population believes in reincarnation...

   ***
    Ben you said: "And this may or may not lead to the demise of humanity -
which may or may not be a terrible thing." At best, loose language like this
means one thing to most people -- that somebody else is being cavalier about
their future; at worst, they are likely to perceive an active threat to
their existence.
  ***

  You can call it cavalier -- I call it honest and open-minded. I guess if
you take that quote out of context it can sound scary, but why do you need
to take it out of context?

  ***
Frankly, I doubt that anyone will care if humanity evolves or transcends to
a higher state of being, so long as it's voluntary.
  ***

This is very naive -- a great many people object to voluntary transhumanist
actions even when they're far milder than transcension. Psychedelic drugs
are illegal, as are smart drugs, homebrew neuromodifications, etc. etc. etc.

Experimentation with stem cells is barely legal, for Chrissake!!!!

Again, you seem to overestimate the rationality and wisdom of the "mass
mind."

  ***
To withhold concern for other humans' lives because some AGI might
theoretically form the view that our mass/energy could be deployed more
beautifully/usefully seems simply silly.
  ***

I do not advocate withholding concern for other humans. I'm sorry if what
I wrote was misinterpreted that way.

  ***
  I think the first step in creating safe AGI is for the would-be creators
of AGI to themselves make an ethical commitment to the protection of
humans -- not because humans are the peak of creation or all that stunningly
special from the perspective of the universe as a whole, but simply because
they exist and they deserve respect -- especially from their fellow humans.
If AGI developers cannot give their fellow humans that commitment or that
level of respect, then I think they demonstrate they are not safe parents
for growing AGIs!
  ***

  In other words, you are stating that only people who agree with your
personal ethics should be allowed to create AGIs -- your personal ethics
being that the preservation of humans is paramount.

  I think that the preservation of humans is very, very important -- but I'm
not willing to assert that it's absolutely paramount just to sound
"politically correct."

  ***
    I was actually rather disturbed by your statement towards the end of
your paper where you said: "In spite of my own affection for Voluntary
Joyous Growth, however, I have strong inclinations toward both the Joyous
Growth Guided Voluntarism and pure Joyous Growth variants as well." My
reading of this is that you would be prepared to inflict a Joyous Growth
future on people whether they wanted it or not, even if this resulted in
the involuntary elimination of people or other sentients that were somehow
seen by the AGI or AGIs pursuing Joyous Growth as an impediment to the
achievement of joyous growth. If I've interpreted what you are saying
correctly, that's pretty scary stuff!
  ***

  It seems you are oddly misinterpreting my statement here. I said that my
primary affection was for VOLUNTARY Joyous Growth, which is an ethical
principle that places *free choice* as a primary value.

  Free choice means not forcing humans to transcend, and not forcing humans
not to transcend.

  What you are advocating is a Joyous Growth Biased Voluntarism, in which AS
AN ABSOLUTE RULE no one is to be forced to transcend (or annihilated, or
forced to do anything). I think this is more problematic, but is also
worthy of consideration.

  ***
  I think the next step is to consider what values we would like AGIs to
hold in order for them to be sound citizens in a community of sentients. I
think the minimum that is needed is for them to have a tolerant, respectful,
compassionate, live-and-let-live attitude.
  ***

  It seems to me that you're just rephrasing what I call "Voluntary
Joyousity" here, in language that you like better for some reason.

compassionate = valuing the Joy of others
  tolerant, live-and-let-live = valuing others' ability to choose

  All you've left out is the "growth" part.

  If you prefer the verbiage of "compassionate and tolerant" as opposed to
"joy and choice", that's fine with me.... None of these English words
really captures what needs to be said exactly, anyway...

  ***
  I think AGIs that had a tolerant, respectful, compassionate, live-and-let-
live ethic would not intrude excessively on human society. They might, for
example, try to discourage female circumcision or even go so far as to stop
capital punishment in human societies (I can't see that these practices
would conform to the ethics that the AGIs were given [under my scenario] by
their human creators/carers). As far as I can see, I don't think that AGIs need to
have ported into them a sort of general digest of human-ness or even an
idiosyncratic (renormalised) essence of general humane-ness. I think we
should be able to be more transparent than that and to identify the key
ethical drivers that lead to tolerant, respectful, compassionate,
live-and-let-live behaviour.
  ***

What's odd is that you seem to agree with me almost completely -- you
agree with me that Eliezer's idea of embodying humane-ness in AIs is
overcomplicated, and you agree that it's good to supply AGIs with general
ethical principles. The only difference is that you choose different words
to describe what I call Joy and Choice, and you appear not to value what I
call Growth enough to want to make it a basic value.

  ***
  I think these notions are sufficiently abstract to be able to pass your
test of being likely to "survive successive self-modification".
  ***

Yes -- because they're the SAME as the notions I proposed, merely worded
in a way that evokes more pleasant associations for you...
  ***
  In your paper you suggest that we need AGIs to save humanity from our
destructive urges (applied via advanced technology). If having AGIs around
could increase the risk of humanity being wiped out to achieve a more
beautiful deployment of mass/energy, then it might be a good idea to go back
and check just how dangerous the other feared technologies really are. While
nanotech and genetic engineering could produce some pretty virulent and
deadly entities, I'm not sure that they are likely to be much more
destructive than bubonic plague, Ebola virus, and smallpox have been in
their time. There are a lot of people around, so even if these threats
killed millions, or even billions, they are unlikely to wipe out even most
people. So should we seek help against this scale of threat by creating
something that might arbitrarily decide to wipe out the lot of us on a whim?
  ***

  I'm afraid you are being woefully naive on this particular topic. The
existential risks of MNT and bioweapons are very, very real -- not today, but
within centuries for sure, and decades quite probably. But I don't have
time to trot out the arguments for this point tonight.

  -- Ben G


