RE: Positive Transcension 2

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Feb 20 2004 - 09:41:22 MST


Philip,

OK -- You win!!

I think you made a lot of wrong points in your response to my essay, but you
made one VERY important point which -- though it occurred to me before --
was nowhere near prominent enough in my mind when I wrote the Positive
Transcension essay...

My article was oriented toward the open exploration of ideas and
possibilities -- BUT some of these ideas are too shocking for most people to
deal with. This is in line with Eliezer's "shock level" idea that gave this
email list its name.

The plus of discussing things openly on a list like this is that I get good
feedback from smart, interested people who have also thought about these
issues.

The minus is that an e-mail trail is left that could conceivably cause
trouble among other humans who don't share the common conceptual mindset of
the transhumanist community...

Your good point is that

IF launching a Transcension of type Y is the best strategy according to
Ethical System E

AND the odds of successfully launching a Transcension are a lot higher with
the acceptance of a greater number of humans

THEN it is worth exploring whether either

a) a Transcension of type Y is acceptable to the vast majority of humans, or
if not whether
b) there is a Transcension of type Y' that is also a very good outcome
according to E, but that IS acceptable to a lot more humans

If such a Transcension Y' is found, then it's a lot better to pursue Y' than
Y, because the odds of achieving Y' are significantly greater.

If

Y = a Transcension supporting Voluntary Joyous Growth

and

Y' = a Transcension supporting Voluntary Joyous Growth, but making every
possible effort to enable all humans to continue to have the opportunity to
live life on Earth as-is, if they wish to

then it may well be that the conditions of the above are met.

I think you overestimate the extent to which Y' is acceptable to the vast
mass of humans. After all, as I said, if people will outlaw hallucinogens
and stem cell research and require government approval for putting chips in
one's own brain -- and plague Alcor with endless lawsuits -- then it's naive
to think people won't stand in the way of the Transcension.

But, definitely Y' is easier to sell than Y, and will create LESS
opposition, thus increasing odds of achievement.

However, this doesn't get around my skepticism as to the possibility of
guaranteeing that "all humans [will] continue to have the opportunity to
live life on Earth as-is, if they wish to." The problem is, I think it is
not very easy to make this guarantee about post-Transcension dynamics. If
I'm right, then the options come down to:

a) Lie about it, and convince people that they CAN have this guarantee after
all, or

b) [Try to] convince people that the risk is acceptable given the rewards
and the other risks at play, or

c) Launch a Transcension against most people's will

So, ethically, the best hope is that through a systematic process of
education, the majority of humans will come to realization b) ... that
although there are no guarantees, the rewards are worth the risks. Then
democracy is satisfied, growth is satisfied, etc. etc. But this kind of
education is going to prove very hard to do -- though for sure a very very
worthwhile endeavor...

-- Ben

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Philip
> Sutton
> Sent: Friday, February 20, 2004 11:48 AM
> To: sl4@sl4.org
> Subject: RE: Positive Transcension 2
>
>
> Hi Ben,
>
> My guess is that, optimistically, it is going to be a decade or
> two before we see AGIs capable of driving a fast take-off to
> transcension/singularity or whatever. During this time the AGIs
> together with their human designers/programmers/trainers/educators/
> etc. are going to have to co-exist with the rest of humanity - so that
> resources can be devoted to the support of AGI (computing
> power/design & programming skill etc.) and the simple right to continue
> with the work/to have AGIs operating is granted by society.
>
> If during this time people come to fear AGIs (or their potential) they
> may engage in all sorts of blocking activities (legal/direct action
> etc.) of a more or less extreme nature. Also people developing AGIs
> (and the AGIs themselves) will need supporters to defend AGI
> development so that the work and the AGIs can continue.
>
> If AGI promoters are projecting vibes that they are not 100% behind the
> protection/welfare of humans and that somehow AGIs might be
> engaged in the 'demise of humanity' then some people might get a bit
> jumpy and might flip into active opposition mode - and you have to
> admit - if they did it would not be surprising!
>
> Ben, I really admire the way that you have been openly exploring a
> huge number of the issues related to the development of AGI via the
> email lists and in other ways - so I would hate to see your style, or
> anyone else's, cramped by a need for formulaic 'political correctness'.
> But I think you need to keep putting yourself in the shoes of other
> people who are not closely involved in the development of AGI - you've
> got to be able to feel what they might feel (after the style of the
> universal mind simulator! :)
>
> Evolution (even the radical leap-forward variety represented for
> example by the first trilobite with eyes, the first human with advanced
> language and the first AGI with competent self-upgrading skills) is
> still a game of making steps where *each one* is viable so that future
> potential can survive the present moment to be able to unfold later.
>
> I think the safest way to get AGIs with competent self-upgrading
> skills going is to make a compact with humanity (all of them/us - even
> including the vast mass of ignorant people!) that AGIs will be
> developed in a way that does not violate people's desire for continued
> existence and desire for autonomy over the nature of their lives. If
> AGIs are designed/trained so that they do not threaten the existence/
> autonomy desires of people then I think there is a much better chance
> that enough people will support or tolerate the creation of AGIs so that
> AGIs actually emerge and persist long enough to be able to look after
> themselves and be able to assure their own survival.
>
> I'm NOT pushing for political correctness - I'm pushing for a politically-
> savvy and compassionately-sensitive approach to co-existence
> between the AGI developers/AGIs and the rest of humanity.
>
> As I mentioned a moment ago, I admire your intellectual openness -
> and I'm not saying this to suck up to you. It's how I feel.
>
> But I think you have to be very sensitive to the feelings of others -
> your "Encouraging positive transcension" article has a major
> preoccupation with the notion of the demise of humanity as a
> conceivable outcome of
> the development of AGI - and in saying this I'm not saying that you are
> actually advocating the demise of humanity. But run the text through
> the Novamente inference engine and see if I'm wrong about the
> preoccupation!
>
> I think this line-of-thought/this outcome (the demise of humanity
> through either evolution to something else or through reassignment of
> mass energy(!!)) is simply not necessary to the development of AGI or
> the wondrous flowering of the universe. Once AGIs have access to
> the physical environment, production processes and transport -
> especially if they can access places off-earth - they will not be 'held
> back' by people. But leading up to this stage, AGIs and their human
> designers/developers/trainers etc. could be held back or totally blocked
> if an anti-AGI panic set in.
>
> That is why I think humans working on AGI development and (later) the
> AGIs themselves need to make (and honour) a pact with humanity that
> the AGIs will not threaten humans' existence and lifestyle autonomy.
>
> If the AGIs can also help humanity to solve our many current problems,
> so much the better - then there will be a sound basis for mutual
> recognition and cooperation.
>
> This humanity-AGI pact is a different thing from the problem of creating
> 'friendly' ethics in AGI. I have no skills in AGI design or development,
> but your intuition that 'friendliness' will be easier to implant and
> retain through massive cycles of self-modification if it is more all-
> encompassing or more generally stated makes sense to me. So I'm
> suggesting that we need TWO processes:
>
> 1. building in meta-friendliness towards all sentients or even to all
> life that unfailingly generates, amongst other responses,
> friendliness towards humans.
>
> 2. a conscious pact with humanity that AGIs will respect humans' desire
> to exist and have lifestyle autonomy/self-determination.
>
> This pact is necessary in my view to reassure people that tolerating or
> supporting the emergence of AGIs is not a threat to themselves or their
> children or humanity in general.
>
> If, as a consequence of the freedom protected under this pact, some
> or all people decide to turn their backs on the possibilities of
> transcension *for themselves*, it doesn't matter in the wider sweep of
> cosmic history. Personally I think a large number of people *will* avail
> themselves of the benefits of transcension - if for no other reason than
> to extend their lives. Many people will also be excited at the prospect
> of communicating with advanced AGIs and many people will want to be
> able to personally tap the wonders of intellectual expansion that come
> from augmentation or uploading.
>
> But if other people choose not to go this way, even if they want to
> stay exactly as they are, it doesn't matter two hoots - it should be
> their choice. I don't think AGI developers/promoters should project one iota
> of concern about people making such a conservative choice.
>
> I think the AGI-human pact could start off as a one way offer - offered
> by the AGI development community freely and unilaterally to the rest of
> the community. If the heat builds up on AGI development then it might
> be necessary later to make a formal two-way pact via the formal
> political processes operating around the globe at the time.
>
> Can I clarify - the whole of the foregoing is NOT premised on the
> simple notion that the preservation of humanity is paramount over
> anything else in the universe. What I'm saying is more prosaic than
> that... if you want the least-hassle path to the creation of self-
> upgrading AGI then my intuition is that this is best facilitated by an
> historical pact that AGIs will be designed so that, as an emergent
> property of their ethics, they are unfailingly friendly to people -
> allowing humans to continue to exist and shape their own lifestyles.
> I do not see humans as the centre of the universe. I don't care
> personally if they evolve into something that I would not recognise as
> being like humanity-2004 style. But I just want, personally and for my
> children and for their descendants and for other people and their
> descendants, the freedom to exist and
> shape our lifestyles. My guess also is that if there are other sentients
> around the universe many of them would also be likely to want this sort
> of freedom.
>
> -------
>
> Ben, it seems to me that your favoured ethical structure is that all (?)
> AGIs should embody a core ethical commitment to "voluntary joyous
> growth" or some variant of this?
>
> My feeling is that we should allow for a greater diversity of prime
> goals than this and that if there is a greater diversity then it is
> necessary to build in friendliness as an unfailing companion of
> whatever other goals
> the AGIs might have.
>
> I think it would be relatively easy to get consensus amongst people that
> AGIs (should they exist) should be friendly to people. But beyond that
> human consensus will be hard to find - even on the SL4 list or the AGI
> list. I don't see this as a problem. There will be enough people
> involved in AGI development who are motivated to see the wondrous
> complexity and patterns of the universe unfold and so I'm sure they will
> ensure that a fair number of AGIs are motivated to pursue "voluntary
> joyous growth". But there are lots of other
> (friendliness-friendly) goals
> that could motivate people and AGIs and I think the outcome of AGI
> development will be even more beneficial overall if there is a diversity
> of goals among AGIs.
>
> It seems to me that you often write, Ben, as if you think in terms of
> there being only one or a few AGIs. If I'm right about this, then I
> suspect this mindset tends to lead you and perhaps others to think
> that there might
> be only one best AGI goal set. My own default position is to imagine
> that there will be lots of AGIs (with lots of different origins) and that
> their goal sets could be quite divergent.
>
> I think it is safest and most practical to start with a presumption that
> there will be a diversity of AGIs with a diversity of cognitive
> architectures and a diversity of goal sets.
>
> So I'm interested in the population or group dynamics that result
> from interactions between people and AGIs, and between AGIs and
> AGIs. I think some of the outcomes we hope for from AGIs should be
> sought as emergent properties arising from a diverse population of
> AGIs working with a diverse population of people.
>
> The one common feature that I think is needed for *all* AGIs
> *individually* (and also all people!) is an ethic of co-
> existence/friendliness or whatever we choose to call it.
>
> Cheers, Philip
>
>
>
>


