RE: FAI means no programmer-sensitive AI morality

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 30 2002 - 13:41:53 MDT

> But you do, personally, have criteria of merit which you use to actually
> choose between moralities? A desirability metric is a way of choosing
> between futures. Do you have a "criterion of merit" that lets you
> choose between desirability metrics? What is it?

I'm not sure I get your question.

I have two ways of judging moral/ethical systems; roughly, they are as
follows...

sim(,) is a similarity measure

1)
F(X) = sim(X,Y)

Y= "Ben's morality"
X = another moral system being judged

;->

2)

Let w(X) denote the set of probable worlds resulting if a population of
intelligent entities has moral system X

Let des(W,X) denote the desirability of world W according to moral system X

F(X) = des(w(X), Y)

[this can be made more precise-looking using probability distributions]

(I'm not trying to claim this is real math, just having fun... ;)

Criterion 2 is more fundamental; criterion 1 is just an easier-to-compute
proxy for criterion 2...
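[For fun, the two criteria can be sketched in code. Everything below is a
toy illustration, not anything from the original post: moral systems are
modeled as dicts of value dimensions with weights, sim() is cosine
similarity, w() trivially maps a moral system to a "world" with matching
features, and des() is a weighted sum. All names and numbers are made up.]

```python
import math

def sim(x, y):
    """Similarity between two moral systems: cosine similarity
    over their value-dimension weights (toy choice)."""
    keys = set(x) | set(y)
    dot = sum(x.get(k, 0.0) * y.get(k, 0.0) for k in keys)
    nx = math.sqrt(sum(v * v for v in x.values()))
    ny = math.sqrt(sum(v * v for v in y.values()))
    return dot / (nx * ny) if nx and ny else 0.0

def w(x):
    """Probable world if a population holds moral system x.
    Trivially, the world's features mirror the system's weights."""
    return dict(x)

def des(world, y):
    """Desirability of a world according to moral system y:
    a weighted sum over the world's features."""
    return sum(y.get(k, 0.0) * v for k, v in world.items())

# hypothetical moral systems
ben = {"compassion": 0.9, "growth": 0.7, "choice": 0.8}
other = {"compassion": 0.5, "obedience": 0.9}

f1 = sim(other, ben)     # criterion 1: similarity to Ben's morality
f2 = des(w(other), ben)  # criterion 2: desirability, by Ben's morality,
                         # of the world the other system produces
```

Under this toy model, criterion 1 scores a system by how much it overlaps
with Y's values, while criterion 2 scores it by the world it produces.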

>
> > However, it could nonetheless be the case that highly
> intelligent systems
> > tend toward certain moral systems, as opposed to others. Just as modern
> > technological culture tends toward different moral systems than tribal
> > culture....
>
> If *human* intelligent systems, but not necessarily all theoretically
> possible minds-in-general, tend toward certain moral systems as opposed
> to others, then would you deem it desirable to construct an AI such that
> it shared with humans the property of tending toward these certain moral
> systems as intelligence increased?

That is a tough nut of a question, Eliezer. I have thought about it before,
and it's troublesome.