From: Ben Goertzel (firstname.lastname@example.org)
Date: Sun Jun 30 2002 - 13:41:53 MDT
> But you do, personally, have criteria of merit which you use to actually
> choose between moralities? A desirability metric is a way of choosing
> between futures. Do you have a "criterion of merit" that lets you
> choose between desirability metrics? What is it?
I'm not sure I get your question.
I have two ways of judging moral/ethical systems; roughly, they are as
follows.

Criterion 1:

F(X) = sim(X, Y)

where sim(,) is a similarity measure
Y = "Ben's morality"
X = another moral system being judged
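Criterion 1 could be sketched in code along these lines. This is just a toy illustration of "F(X) = sim(X, Y)", not anything Ben actually specified: moral systems are represented here as invented value-weight dictionaries, and cosine similarity is one arbitrary choice of sim(,).

```python
import math

def sim(x, y):
    """Cosine similarity between two moral systems, each represented
    as a dict mapping a valued quality to a weight."""
    keys = set(x) | set(y)
    dot = sum(x.get(k, 0.0) * y.get(k, 0.0) for k in keys)
    norm = (math.sqrt(sum(v * v for v in x.values())) *
            math.sqrt(sum(v * v for v in y.values())))
    return dot / norm if norm else 0.0

# Placeholder moral systems; the qualities and weights are invented.
bens_morality = {"compassion": 0.9, "growth": 0.8, "freedom": 0.7}
other_system = {"compassion": 0.5, "obedience": 0.9, "growth": 0.3}

F_X = sim(other_system, bens_morality)  # a number in [0, 1]
```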
Criterion 2:

Let w(X) denote the set of probable worlds resulting if a population of
intelligent entities has moral system X
Let des(W, X) denote the desirability of world W according to moral system X

F(X) = des(w(X), Y)
[this can be made more precise-looking using probability distributions]
(I'm not trying to claim this is real math, just having fun... ;)
Criterion 2 is more fundamental; criterion 1 is an easier-to-compute proxy
for criterion 2...
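Making criterion 2 "more precise-looking using probability distributions", as suggested above, amounts to an expected-desirability computation: F(X) = sum over worlds W of P(W | X) * des(W, Y). A toy sketch, with the world distribution and desirability scores invented purely for illustration:

```python
def expected_desirability(world_dist, des, reference_morality):
    """Criterion 2 as an expectation: average the desirability of the
    worlds a moral system tends to produce, where desirability is
    judged by the reference morality (here, Y = "Ben's morality")."""
    return sum(p * des(world, reference_morality)
               for world, p in world_dist.items())

# w(X): hypothetical worlds resulting from moral system X, with
# made-up probabilities.
w_of_X = {"cooperative world": 0.6, "conflict-ridden world": 0.4}

# des(W, Y): placeholder desirability of each world according to Y.
def des(world, morality):
    scores = {"cooperative world": 0.9, "conflict-ridden world": 0.2}
    return scores[world]

F_X = expected_desirability(w_of_X, des, "Y")  # 0.6*0.9 + 0.4*0.2
```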
> > However, it could nonetheless be the case that highly
> intelligent systems
> > tend toward certain moral systems, as opposed to others. Just as modern
> > technological culture tends toward different moral systems than tribal
> > culture....
> If *human* intelligent systems, but not necessarily all theoretically
> possible minds-in-general, tend toward certain moral systems as opposed
> to others, then would you deem it desirable to construct an AI such that
> it shared with humans the property of tending toward these certain moral
> systems as intelligence increased?
That is a tough nut of a question, Eliezer. I have thought about it before
and it's troublesome.
What is your view?
The real question is what world will result from an AI having moral system
X, versus moral system Y.

If the probable worlds resulting from X are better (by my standards) than
the probable worlds resulting from Y, then I'll vote for X. (My criterion 2
again.)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT