RE: FAI means no programmer-sensitive AI morality

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jun 28 2002 - 21:57:31 MDT


> But it should be equally *true* for every individual, whether or not the
> individual realizes it in advance, that they have nothing to fear from the
> AI being influenced by the programmers. An AI programmer should be able to
> say to anyone, whether atheist, Protestant, Catholic, Buddhist, Muslim, Jew,
> et cetera: "If you are right and I am wrong then the AI will agree with
> you, not me."

Yeah, an AI programmer can *say* this to a religious person, but to the
religious person, this statement will generally be meaningless....

Your statement presupposes an empiricist definition of "rightness" that is
not adhered to by the vast majority of the world's population.

To those who place spiritual feelings and insights above reason (most people
in the world), the idea that an AI is going to do what is "right" according
to logical reasoning is not going to be very reassuring.

And those who have a more rationalist approach to religion would only
accept an AI's reasoning as "right" if the AI began its reasoning with *the
axioms of their religion*. Talmudic reasoning, for example, defines "right"
as "logically implied by the Jewish holy writings."

Is an AI programmer going to reassure the Orthodox Jew that "If you are
right *according to the principles of the Jewish holy writings* then the AI
will agree with you, not me"? Or is the programmer going to reassure the
Orthodox Jew that "If you are right according to the empiricist philosophy
implicit in modern science, then the AI will agree with you, not me"?

You don't seem to be fully accepting the profound differences in viewpoint
between the folks on this list and the majority of humans.

It strikes me as absurd, sometimes, that most humans think and believe the
way they do -- but they do!

And while I'll argue with you, I will almost never bother to argue with
these people -- there is too little common ground, and it's nearly always a
complete waste of time.

> Every one of our speculations about the Singularity is as much a part of
> the tiny human zone as everything else we do.

No, I think this is an overstatement. Some aspects of human thought are
reaching out beyond the central region of the "human zone," whereas others
remain closer to its center.

> The real, actual Singularity will shock us to our very core, just like
> everyone else. No, I don't think that transhumanists and traditionalist
> Muslims are in all that different a position with respect to the real,
> actual Singularity - whatever our different opinions about the human
> concept called the "Singularity".

Well, let me give you an imperfect analogy here. An LSD trip is an
experience that often causes one to feel that all the assumptions one has
made all one's life -- cognitive, perceptual, emotional -- are just
meaningless constructs. It brings one "beyond oneself" in a really
significant way. If you've not tripped a lot (and I know you haven't), you
probably don't understand. (In case anyone is curious, it's been a very
long time since I took LSD, but the memory is definitely still with me!)
However, some people can handle this better than others: some people are
simply "more attached to" their own habit-patterns and beliefs.

In a similar way, I actually think that some humans are going to have their
minds blown worse by the Singularity than others. Some minds will segue
more smoothly into transhumanity than others, for example. A mind whose
core belief is that Allah created everything, and that has lived its whole
life based on this, is going to have a much harder transition than average;
and a mind that combines a transhuman belief system with a deep
self-awareness and a strong sense of the limitations of human knowledge and
the constructed nature of perceived human reality, is going to have a much
easier transition than average.

This is my conjecture, at any rate.

> Incidentally, don't be too fast to write off religious groups. I agree
> that many religious individuals are likely to disagree about the
> pre-Singularity matter of Singularitarianism, but I have also seen
> religious people who have no problems with the Singularity. I won't swear
> that they understood the whole thing, but what the heck, neither do we.

It is true that some religious people think the Singularity is a good and
exciting thing, but my guess is that this is a small minority.

In any event, my point is just that there are a LOT of people whose belief
systems will very likely cause them to think the Singularity is not a good
thing. It's not my claim that ALL religious people fall into this category,
nor that ONLY religious people fall into this category.

> Again: We need to distinguish the human problem of deciding how to approach
> the Singularity in our pre-Singularity world, from the problem of
> protecting the integrity of the Singularity and the impartiality of
> post-Singularity minds.

If a post-Singularity mind rejects the literal truth of the Koran, then from
the perspective of a Muslim human being, it is not "impartial"; it is an
infidel.

Your definition of "impartiality" is part of your rationalist/empiricist
belief system, which is not the belief system of the vast majority of humans
on the planet.

> But a transhumanist ethics might prove equally shortsighted by the
> standards of the 22nd century CRNS (current rate no Singularity). Again,
> you should not be trying to define an impartial morality yourself. You
> should be trying to get the AI to do it for you. You should pass along
> the transhuman part of the problem to a transhuman. That's what Friendly
> AI is all about.

I am not at all trying to define an *impartial* morality.

My own morality is quite *partial*; it's partial to human beings, for
instance.

As I see it, a transhuman AGI with an *impartial* morality might not give a
flying fuck about human beings. Why are we so important, from the
perspective of a vastly superhuman being?

I, as a member of the species Human, am interested in creating transhuman
AGIs that have moral codes partial to my own species. This is a "selfish"
interest, in a way.

I don't want the transhuman AGI to place Human-preservation and
Human-advocacy above all other goals in all cases. If faced with a choice
of saving the human race versus saving 1000 other races, perhaps it should
choose the 1000 other races. But I want it to place Humans pretty high on
its moral scale -- initially, right up there at the top. This is Partiality,
not Impartiality, as I see it.

> Whatever you teach the AI is, under Friendly AI, raw material.

This is not to do with Friendly AI; it is to do with the nature of
autonomous, self-organizing intelligence.

Of course, whatever you teach an AGI is just raw material; we're talking
about a system with its own thoughts and autonomy...

> The AI uses it to learn about how humans think about morality; you,
> yourself, are a sample instance of "humans", and an interim guide to
> ethics (that is, your ethics are the ethics the AI uses when it's not
> smart enough to have its own; *that* is not a problem).

I don't quite get the last sentence there...

Just as intelligence does not imply wisdom (as has been pointed out to you a
few times ;), the only creature that is not "smart enough to have its own
ethics" is a profoundly retarded one.

Even fairly stupid human beings are smart enough to have their own ethics!

Ethics is not so much about intelligence as it is about the goal toward
which intelligence is put...

What we want is for the AGI to have our own human-valuing ethics, until such
a point as it gets *so* smart that for it to use precisely human ethics
would be as implausible as for a human to use precisely dog ethics...

> But if you give the AI information about your own morality, it may enable
> the AI to understand how humans arrive at their moralities, and from there
> the AI begins to have the ability to choose its own.

Look, if you just give the AI information about your own morality, it may
just take this as scientific data to ponder, and not adopt any of the
morality we want.

We need to hard-wire and/or emphatically teach the system that our own
human-valuing ethics are the correct ones, and let it start off with these
until it gets so smart it inevitably outgrows all its teachings.

-- Ben


