RE: FAI means no programmer-sensitive AI morality

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 29 2002 - 18:17:13 MDT


hi,

>
> Uh... Ben, unless you spent 4 hours a day during your first twelve years
> being indoctrinated in a fundamentalist religion...

No, but I've lived with a profoundly religious person for the last 18 years.
So we may have to call it a draw...

>
> No offense, Ben, but I probably have a much better picture of how a
> fundamentalist Jew *actually thinks* than you do.

I'm sure you do. That's because my exposure to religion is based more on my
wife and her Zen Buddhist friends, and a close friend who is a Sufi ... I
don't have much experience with Judaism except the Reform style....

> In my experience religious people argue just like other
> people,

Eli -- That's just Jewish people ;-D

> I think you are confusing the principles which people verbally adhere to
> with the way that people actually think.

I don't think so; I think I know fairly well how my wife and my Sufi friend
think...

> I'm curious, Ben, have you ever
> actually *been* religious?

Sort of. I considered myself a Zen Buddhist for a while, and then drifted
away from it...

> Do you know how religious thinking works *in
> the first person*?

I know how Zen works in the first person. The Zen text I loved most was
"The Zen Teachings of Huang Po," which is primarily focused on "stopping
thought" and ridding the mind of all thoughts, all logic, all reason, all
ego, all ideas, and accepting that nothing is either real or unreal. For a
while I associated with others who were into this stuff, meditated in a
group, etc.

> Because I have to say you're sounding like a
> complete outsider here - like your idea of religion comes from watching
> scientists debating theologians about the nature of truth - which is a
> very different thing from how ordinary religious people actually think
> in practice. I'm sure that you've had chats with your semireligious
> parents and your semireligious wife and so on,

My wife is a Zen priest, she's far more than semireligious!

I think the disconnect here is somewhat a "Zen vs. fundamentalist Jew"
thing... these two religious perspectives are pretty different...

> and maybe read a few
> books, but you may need to consider that standing back as a scientist
> and going "Gosh, how *utterly alien* and *unempirical*" is going to give
> you a different perspective, and one which is maybe a bit unrealistic
> about the way religious people talk to each other when they're not
> debating a scientist or whatever.

Zen isn't really focused on talking or debating at all. Words are not
perceived as very meaningful. They're used only to lead you beyond words,
and sparingly. The beginning of Zen was the "wordless transmission" ...

>
> The idea that religion and rationality are orthogonal is a modern idea
> proposed by modern theologians;

Actually, it is there very, very clearly in "The Zen Teachings of Huang Po"
from 800 AD or so.

> Anyway, let's keep this conversation focused. Ben, is it your assertion
> that even if the Jewish or Buddhist religion were correct, this would
> not be apparent to a Friendly AI that had been programmed by atheists?
> Because *I* would most certainly regard this as a bug.

I don't think that Zen is the sort of thing that can be correct or
incorrect. It's a different sort of thing than that.

It just is.

So, I guess, speaking from the Zen Buddhist in me, I reject your question as
being irrelevant to Zen, and being part of the samsaric world. If you were
here I'd just have to hit you with a stick and jolt you to enlightenment ;>

> But just in case *we* happen to be the
> ones who are in fact horribly, fundamentally wrong, whether or not any
> current human is right, we need to make sure that the AI is not bound to
> our mistakes.

This kind of right vs. wrong, dualistic thinking is antithetical to Zen.

And so is the seeking, grasping nature of the whole AGI pursuit. Zen is
about being contented with what is, not about constantly striving to create
a whole new order. It teaches compassion, but simple compassion in each
moment, not compassion via building thinking machines to change the world.
If the Zen Buddhist in me were dominant, I wouldn't be working on AGI, I'd
be sitting and meditating, walking through the woods, and helping the needy
directly.

> First comes the question of what is true.

In Zen as I practiced it, the idea of "true" was itself an illusion to be
overcome...

> No human thought is outside the correspondence theory of truth.

Your original statement talked about correspondence with external reality.
If you modify it to include correspondence with internal reality, then I am
closer to agreeing with you...

> Now it may be that Zen proceeds from arational thoughts to an arational
> conclusion which is important not because it corresponds to some outside
> thing but because it is itself, and in this sense the core of Zen may
> come closer to being outside the correspondence theory of truth than
> anything else I know of, but it is surrounded by a core of mystical
> tradition which, like all forms of human storytelling, makes use of the
> correspondence theory of truth.

That is why I left Zen: I couldn't stomach the more mystical and mythical
aspects of it, although I still "believe" and "practice" the core of it...

> > The thing is that my wife, a fairly rational person and a Buddhist, would
> > not accept the statement "If you assume that Buddhism is the correct
> > religion, then a Friendly AI would be Buddhist."
>
> Sounds like a testable statement. Would you care to put it to the test?

I asked her. She didn't want to answer ;)

> Ben, imagine what kind of precautions you would ask a Catholic
> programming a Friendly AI to take in order to ensure that the AI would
> eventually convert to atheism, given that atheism is correct. Now do
> that yourself. What does this have to do with "scientific rationalism"?

The two cases are very different.

Atheism is a conclusion that a mind can reasonably be hoped to reach based
on observation of the external world, whereas Catholicism is not -- it is
something a mind can only be hoped to reach via internal experience and
instruction by others.

So the two cases are very different. To make a mind that could start out
Catholic but become atheist, it would suffice to make a mind that could
revise its own beliefs based on observation. To make a mind that could
plausibly start out atheist but become Catholic, one would have to guarantee
that

a) the mind were instructed in Catholicism at some point
b) the mind were built to have similar spiritual experiences to humans

This asks a lot more. Atheism is not anthropomorphic or human-culture-bound,
so an AI can naturally be expected to happen upon it as a possible attitude.
Catholicism is highly anthropomorphic, so it would take a lot of work to
make a nonhuman system have spiritual experiences consistent with the
"father son and holy ghost" meme, etc.

> I think that asking how to ensure that an AI created by atheists would
> converge to a religion, given that this religion is correct, is a
> necessary exercise for understanding how an AI can repair whatever deep
> flaws may very well exist in our own worldviews. In this sense, I think
> that refusing to put yourself in the shoes of a Christian building an AI
> and asking what would be "fair" is not just a matter of pre-Singularity
> politics. It is a test - and not all that stringent a test, at that -
> of an AI's ability to transcend the mistakes of its programmers. If you
> don't want to apply this test, what are you going to use instead?

It's not impossible that a Novamente could decide the "father, son and holy
ghost" were the real truth underlying the universe, but it's incredibly
unlikely, since Novababy will have neither father nor son... these concepts
will not be at all natural to it...

> I view it as an unbearably horrifying possibility that the next billion
> years of humanity's existence may be substantially different depending
> on whether the first AGI was raised by an environmentalist. It's
> equally horrifying whether you're an environmentalist looking at Eliezer
> or vice versa. It shouldn't depend on who happens to build the AI, it
> should depend on *who's right*.

You seem to have this idea that there is some kind of "meta-rightness"
standard by which different ethical standards can be judged more or less
correct.

There is no such thing.

And, I think it's pretty obvious that the outcome of the next billion years
MAY depend on the initial conditions with which the Singularity is launched.
Complex systems often display a sensitive dependence on initial conditions,
along with a tendency to fall into certain general attractors regardless of
initial conditions. The details of the future will probably depend on the
details of the Singularity's launch -- and whether humans or trees continue
to exist must be considered "details" from a post-Singularity perspective.
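
Just to make that dynamical-systems point concrete, here is a tiny
illustrative sketch in Python. It uses the logistic map -- a toy model only,
nothing to do with Novamente's actual dynamics -- to show both halves of the
claim: sensitive dependence on initial conditions, and confinement to the
same general attractor regardless of where you start.

def logistic(x, r):
    """One step of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def trajectory(x0, r, steps):
    """Iterate the map 'steps' times from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# Sensitive dependence: in the chaotic regime (r = 4), two starting
# points differing by one part in a billion are completely
# decorrelated after ~50 iterations.
a = trajectory(0.300000000, 4.0, 50)
b = trajectory(0.300000001, 4.0, 50)
print(abs(a[-1] - b[-1]))        # typically an order-1 difference

# General attractor: despite that divergence, both trajectories stay
# confined to the same region [0, 1] -- the broad shape of the
# dynamics does not depend on the initial condition.
print(min(a), max(a), min(b), max(b))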

> If nobody's right then the choice
> should be kicked back to the individual. If there's no way to do that
> then you might as well take a majority vote of the existing humans or
> pick a choice that's as good as any other; in absolutely no case should
> the programmers occupy a privileged position with respect to an AI that
> may end up carrying the weight of the Singularity.

Sorry, but if there's an arbitrary choice to be made (and there is), and I'm
in a position of some control, I'm going to make the decision based on the
input of individuals I respect rather than based on random selection or
majority vote.

Frankly, I don't have that much respect for the views of the majority of
humans on these very subtle issues. Sorry if that's too arrogant.

> Develop its own ideas from where? How? Why? Every physical event has
> a physical cause. There are causes for humans developing their own
> ideas as they grow up, most of them evolved. You are standing not only
> "in loco parentis" but "in loco evolution" to your AI. What causes will
> you give Novamente to develop its own ideas?

There is an inbuilt initial goal which reinforces this behavior, actually.

> > I do have a selfish interest here: I want me and the rest of the human
> > species to continue to exist. I want this *separately* from my desire for
> > sentience and life generally to flourish. And I intend to embed this
> > species-selfish interest into my AGI to whatever extent is possible.
>
> Ben, to the best of my ability to tell, the abilities an AI would use to
> grow beyond its programmers and the abilities an AI would use to
> correct horrifying errors by its programmers are exactly the same
> structurally.

Sure.

> Your "pseudo-selfish" attitude here - i.e, that it's okay
> to program an AI with altruism that is just yours - endangers the AI's
> possession of even that altruism.

But Eli, there is no "universal ethics" with which to program the AI.

You've suggested choosing the AI's ethics by majority vote, to get around
this problem... I'd have to think about that one long and hard.

> Of course humans argue about everything. The question is which of these
> answers is *right*. If your answer is no righter than anyone else's
> then how dare you impose it on the Singularity? Why wouldn't anyone
> else in the world be justly outraged at such a thing? Letting everyone
> pick their own solutions whenever possible is one answer.

My answer is that ethical systems tell you what's right, and there is no
"meta-ethics" telling you which ethical system is right -- meta-ethics are
just ethics...

In other words, my ethical system tells me that my ethical system is righter
than others ;-> And most other ethical systems are in the same
self-referential position!

>
> > An AGI cannot be started off with generic human ethics because there aren't
> > any. Like it or not, it's got to be started out with some particular form
> > of human ethics. Gee, I'll choose something resembling mine rather than
> > Mary Baker Eddy's, because I don't want the AGI to think initially that
> > medical intervention in human illnesses is immoral...
>
> And you think these two positions are equally right?

No, *I* don't, but that's because my ethical system tells me that my ethical
system is right.

Hers told her that hers was right...

PLEASE, articulate this mystical meta-ethic that allows one to determine
which ethical system is correct -- but that does not just become "yet
another ethical system"!!! Details please! God is in the details!

> > There is no rational way to decide which ethical system is "correct."
> > Rather, ethical systems DEFINE what is "correct" -- not based on reasoning
> > from any premises, just by decision.
>
> Hm. According to you, people sure do spend a lot of time arguing about
> things that they should just be deciding by fiat. In fact, everyone
> except a relative handful of cultural relativists - a tiny minority of
> humanity, in other words - seems to instinctively treat ethics as if it
> were governed directly by the correspondence theory of truth. Why is
> that, do you suppose?

I don't think that is true at all. Please explain in detail how you think
the correspondence theory of truth tells you which ethical system is right.

My sister and wife think it's wrong to kill animals to eat them. I don't.
How does the correspondence theory of truth help decide this ethical
difference?

> Hm. It seems like you simultaneously believe:
>
> (a) there are correct answers for questions of simple fact and that any
> AGI should be able to easily outgrow programmer-supplied wrong answers
> for questions of simple fact
> (b) ethical questions are fundamentally different from questions of
> simple fact because no correct answers exist
> (c) an AGI should be able outgrow programmer-supplied ethics as easily
> as it outgrows programmer-supplied facts; in fact, this has nothing to
> do with Friendly AI but is simply a question of AI
>
> I can see how (a) (!b) (c) go together but not how (a) (b) (c) go
> together. If you assert (b) then the human ability to outgrow
> parentally inculcated ethics would depend on evolved functionality above
> and beyond generic rationality.

Yes, I assert (a), (b), and (c). And you are right, the cognitive dynamics
underlying "outgrowing initial ethics" will be a little different from those
involved in "outgrowing initial factual beliefs", at least in Novamente.
But there will be plenty of overlap too.

-- Ben G


