RE: Humane-ness

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Feb 17 2004 - 11:43:00 MST


And one more point

I said in my essay that "Be nice to humans" or "Obey your human masters" are
ethical prescriptions simply too concrete and low-level to be expected to
survive the Transcension.

However, I suggest that a highly complex and messy network of beliefs like
Eliezer's "humane-ness" is insufficiently crisp, elegant and abstract to be
expected to survive the Transcension either.

I still suspect that abstract principles like "Voluntary Joyous Growth" have
a greater chance of survival. Initially these are grounded in human concepts
and feelings -- in aspects of "humane-ness" -- but as the Transcension
proceeds, they will gain other, related groundings.

-- Ben G

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Ben
> Goertzel
> Sent: Tuesday, February 17, 2004 1:35 PM
> To: sl4@sl4.org
> Subject: RE: Humane-ness
>
>
>
> following up...
>
> So, in sum, the difficulties with Humane AI *as I understand it* are:
>
> 1. The difficulty of defining humane-ness
> 2. The presence, in the near-consensus worldview of humanity, of delusions
> that I judge ethically undesirable
>
> The second point here may seem bizarrely egomaniacal -- who am I to
> judge the vast mass of humanity as being ethically wrong on major
> points? And yet, it has to be observed that the vast mass of humanity
> has shifted its ethical beliefs many times over history. At many
> points in history, the vast mass of humans believed slavery was
> ethical, for instance. Now, you could argue that if they'd had enough
> information, and carried out enough discussion and deliberation, they
> might have decided it was bad. Perhaps this is the case. But to lead
> the human race through a process of discussion, deliberation and
> discovery adequate to free it from its collective delusions -- this is
> a very large task. I see no evidence that any existing political
> institution is up to this task. Perhaps an AGI could carry out this
> process -- but then what is the goal system of this AGI? Do we begin
> this goal system with the current ethical systems of the human race --
> as Eliezer seems to suggest in the quote I gave ("Human nature is not
> a bad place to start...")? In that case, does the AGI begin by
> believing in God and reincarnation, which are beliefs of the vast
> majority of humans? Or does the AGI begin with some other guiding
> principle, such as Voluntary Joyous Growth? My hypothesis is that an
> AGI beginning with Voluntary Joyous Growth as a guiding principle is
> more likely to help humanity along a path of increasing wisdom and
> humane-ness than an AGI beginning with current human nature as a
> guiding principle.
>
> One can posit, as a goal, the creation of a Humane AI that embodies
> humane-ness as discovered by humanity via interaction with an
> appropriately guided AGI. However, I'm not sure what this adds, beyond
> what one gets from creating an AGI that follows the principle of
> Voluntary Joyous Growth and leaving it to interact with humanity. If
> the creation of the Humane AI is going to make humans happier, and
> going to help humans to grow, and going to be something that humans
> choose, then the Voluntary Joyous Growth-based AGI is going to choose
> it anyway. On the other hand, maybe after humans become wiser, they'll
> realize that the creation of an AGI embodying the average of human
> wishes is not such a great goal anyway. As an alternative, perhaps a
> host of different AGIs will be created, embodying different aspects of
> human nature and humane-ness, and allowed to evolve radically in
> different directions.
>
> -- Ben G
>
> > -----Original Message-----
> > From: Ben Goertzel [mailto:ben@goertzel.org]
> > Sent: Tuesday, February 17, 2004 12:53 PM
> > To: sl4@sl4.org
> > Subject: Humane-ness
> >
> >
> >
> > Eliezer,
> >
> > Trolling the Net briefly, I found this quote from you (from the
> > WTA list in Aug. 2003):
> >
> > ***
> > The important thing is not to be human but to be humane. ...
> >
> > Though we might wish to believe that Hitler was an inhuman
> > monster, he was, in fact, a human monster; and Gandhi is noted
> > not for being remarkably human but for being remarkably humane.
> > The attributes of our species are not exempt from ethical
> > examination in virtue of being "natural" or "human". Some human
> > attributes, such as empathy and a sense of fairness, are
> > positive; others, such as a tendency toward tribalism or
> > groupishness, have left deep scars on human history. If there is
> > value in being human, it comes, not from being "normal" or
> > "natural", but from having within us the raw material for
> > humaneness: compassion, a sense of humor, curiosity, the wish to
> > be a better person. Trying to preserve "humanness", rather than
> > cultivating humaneness, would idolize the bad along with the
> > good. One might say that if "human" is what we are, then "humane"
> > is what we, as humans, wish we were. Human nature is not a bad
> > place to start that journey, but we can't fulfill that potential
> > if we reject any progress past the starting point.
> > ***
> >
> > If the goal of your "Friendly AI" project is to create an AI that
> > is "humane" in this sense, then perhaps "Humane AI" would be a
> > better name for the project...
> >
> > I have a few comments here.
> >
> > 1)
> > I am not sure that humane-ness, in the sense that you propose, is
> > a well-defined concept. Doesn't the specific set of properties
> > called "humaneness" you get depend on the specific algorithm that
> > you use to sum together the wishes of various individuals in the
> > world? If so, then how do you propose to choose among the
> > different algorithms?
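> >
> > To make the dependence concrete, here is a minimal sketch in Python.
> > The trait names, the wish-sets and the two aggregation rules are my
> > own hypothetical illustrations, not anything you have proposed:
> >
> > # Two aggregation rules applied to the same hypothetical wish-sets.
> > wishes = {
> >     "person_a": {"compassion", "curiosity", "tribalism"},
> >     "person_b": {"compassion", "fairness", "tribalism"},
> >     "person_c": {"compassion", "fairness", "curiosity"},
> > }
> >
> > def majority(wish_sets):
> >     # A trait counts as "humane" if more than half wish for it.
> >     traits = set().union(*wish_sets)
> >     return {t for t in traits
> >             if sum(t in w for w in wish_sets) > len(wish_sets) / 2}
> >
> > def unanimity(wish_sets):
> >     # A trait counts as "humane" only if everyone wishes for it.
> >     return set.intersection(*wish_sets)
> >
> > sets = list(wishes.values())
> > print(majority(sets))   # all four traits, tribalism included
> > print(unanimity(sets))  # {'compassion'} only
> >
> > Same people, same wishes -- yet whether "tribalism" comes out humane
> > depends entirely on which aggregation rule you pick.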
> >
> > 2)
> > How do you propose to distinguish the "positive" from the
> > "negative" aspects of human nature ... e.g. compassion versus
> > tribalism? I guess you want to distinguish these by a kind of
> > near-consensus process -- e.g. you're hoping that most people, on
> > careful consideration and discussion, will agree that tribalism,
> > although humanly universal, isn't good? I'm not so confident
> > that people's "wishes regarding what they were" are good ones...
> > (which is another way of saying: I think my own ethic differs
> > considerably from the mean of humanity's)
> >
> >
> > Do you propose to evaluate
> >
> > P(X is humane) = P(X is considered good by H after careful
> > reflection and discussion | H is human)
> >
> > I guess you're thinking of something more complicated along these
> > lines (?)
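> >
> > Spelled out as a toy calculation -- where the judge function, the
> > verdicts and the uniform sampling are all hypothetical stand-ins I'm
> > inventing for illustration -- the estimate might look like:
> >
> > import random
> >
> > def judge(human, x):
> >     # Stand-in for "human H, after careful reflection and
> >     # discussion, considers X good" -- the hard part, which this
> >     # sketch assumes away.
> >     return human["verdicts"].get(x, False)
> >
> > def p_humane(x, humans, n_samples=10000):
> >     # Estimate P(X is humane) by sampling humans uniformly.
> >     hits = sum(judge(random.choice(humans), x)
> >                for _ in range(n_samples))
> >     return hits / n_samples
> >
> > humans = [
> >     {"verdicts": {"compassion": True, "tribalism": True}},
> >     {"verdicts": {"compassion": True, "tribalism": False}},
> >     {"verdicts": {"compassion": True, "tribalism": False}},
> > ]
> > print(p_humane("compassion", humans))  # ~1.0
> > print(p_humane("tribalism", humans))   # ~0.33
> >
> > Even granting such an estimate, one still has to pick a cutoff for
> > "humane enough", which smuggles the aggregation problem back in.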
> >
> > One runs into serious issues with cultural and individual
> > relativity here.
> >
> > For instance, the vast majority of humans believe that
> >
> > "Belief in God"
> >
> > is a good and important aspect of human nature. Thus, it seems
> > to me, "Belief in God" should be considered humane according to
> > your definition -- it's part of what we humans are, AND part of what
> > we humans wish we were.
> >
> > Nevertheless, I think that belief in God -- though it has some
> > valuable spiritual intuitions at its core -- basically sucks.
> > Thus, I consider it MY moral responsibility to work so that belief
> > in God is NOT projected beyond the human race into any AGIs we may
> > create. Unless (and I really doubt it) it's shown that the only way
> > to achieve other valuable things is to create an AGI that's deluded
> > in this way.
> >
> > Of course, there are many other examples besides "belief in God"
> > that could be used to illustrate this point.
> >
> > You could try to define humaneness as something like "What humans
> > WOULD wish they were, if they were wiser humans" -- but we humans
> > are fucking UNwise creatures, and this is really quite essential
> > to our humanity... and of course, defining this requires some
> > ethical or metaethical standard beyond what humans are or wish
> > they were.
> >
> > ??
> >
> > -- Ben G
> >
>


