All sentients have to be observer-centered! My theory of FAI morality

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Wed Feb 25 2004 - 23:45:51 MST


My main worry with Eliezer's ideas is that I don't
think a non-observer-centered sentient is logically
possible. Or if it is possible, such a sentient would
not be stable. Can I prove this? No. But all the
examples of stable sentients (humans) that we have
are observer-centered. I can only point to this, and
to the fact that so many people posting to SL4 agree
with me. I strongly urge Eliezer and others working
on AI NOT to attempt the folly of trying to create a
non-observer-centered AI. For goodness' sake don't
try it! It could mean the doom of us all.

I do agree that some kind of 'Universal Morality' is
possible, i.e. I agree that there exists a
non-observer-centered morality which all friendly
sentients would aspire to. However, as I said, I
don't think that non-observer-centered sentients
would be stable, so no friendly, stable sentient can
follow Universal Morality exactly.

If AI morality were just:

Universal Morality

then I postulate that the AI would fail (either it
could never be created in the first place, or else it
would not be stable and would undergo Friendliness
failure).

But there's a way to make AIs stable: add a small
observer-centered component. Such an AI could still
be MOSTLY altruistic, but it would only be following
Universal Morality as an approximation, since there
would also be an observer-centered component.

So I postulate that all stable FAIs have to have
moralities of the form:

Universal Morality x Personal Morality

Now Universal Morality (by definition) is not
arbitrary or observer-centered. There is one and only
one Universal Morality, and it must be symmetric
across all sentients (it has to work if everyone does
it - positive-sum interactions).

But Personal Morality (by definition) can have many
degrees of freedom and is observer-centered: there
are many different possible kinds of personal
morality, and each is subjective to the sentient
holding it. The only constraint is that Personal
Morality has to be consistent with Universal Morality
to be Friendly. That's why I say that stable FAIs
follow Universal Morality transformed by (the
multiplication sign) Personal Morality.
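
To make the structure concrete, here is a rough
Python sketch of the composition I have in mind. The
class and function names are purely my own
illustration, not anyone's actual design:

  # "Universal Morality x Personal Morality" as a two-layer check.
  # Universal Morality is a hard constraint shared by every Friendly
  # sentient; Personal Morality is an observer-centered layer that is
  # only allowed to narrow, never override, the universal layer.
  class ComposedMorality:
      def __init__(self, universal_check, personal_check):
          # universal_check: action -> bool, identical for every FAI
          # personal_check:  action -> bool, differs from FAI to FAI
          self.universal_check = universal_check
          self.personal_check = personal_check

      def permits(self, action):
          # If Universal Morality forbids the action, Personal
          # Morality cannot make it permissible.
          if not self.universal_check(action):
              return False
          # Within what Universal Morality allows, the
          # observer-centered component narrows the choices further.
          return self.personal_check(action)

The point of the layering is that the consistency
constraint is built in: the personal component can
only subtract from what the universal component
already permits.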

Now an FAI operating off Universal Morality alone
(which I'm postulating is impossible or unstable)
would lead to one and only one (unique) Singularity.
There would be only one possible form a successful
Singularity could take. A reasonable guess (due to
Eliezer) is that:

Universal Morality = Volitional Morality

That is, it was postulated by Eli that Universal
Morality is respect for sentient volition (free
will). With no observer-centered component, an FAI
following this morality would aim to fulfil sentient
requests (consistent with sentient volition). But I
think that such an AI is impossible or unstable.

I was postulating that all stable FAIs have a
morality of the form:

Universal Morality x Personal Morality

If I am right, then there are many different kinds
of successful (Friendly) Singularities. Although
Universal Morality is unique, Personal Morality can
have many degrees of freedom. So the precise form a
successful Singularity takes would depend on the
'Personal Morality' component of the FAI's morality.

Assuming that:

Universal Morality = Volition-based Morality

we see that:

Universal Morality x Personal Morality

leads to something quite different. Respect for
sentient volition (Universal Morality) gets
transformed (multiplication sign) by Personal
Morality. This leads to a volition-based morality
with an Acts/Omissions distinction (see my previous
post for an explanation of the moral Acts/Omissions
distinction).

FAIs with a morality of this form would still
respect sentient volition, but they would not
necessarily fulfil sentient requests. Sentient
requests would only be fulfilled when such requests
are consistent with the FAI's Personal Morality. So
the 'Personal Morality' component would act like a
filter, stopping some sentient requests from being
fulfilled. In addition, such FAIs would be pursuing
goals of their own (so long as those goals did not
violate sentient volition). So you see, my form of
FAI is a far more interesting and complex beast than
an FAI which just followed Universal Morality.
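
As a toy illustration of that filtering (again, the
function names are just my invention for the
example):

  # A request is acted on only if it passes BOTH layers: the
  # volitional (Universal) check and this particular FAI's
  # Personal Morality check.
  def handle_request(request, universal_ok, personal_ok):
      # universal_ok: would fulfilling this violate any
      # sentient's volition?
      if not universal_ok(request):
          return "refused: violates sentient volition"
      # personal_ok: is this something *this* FAI is willing to do?
      if not personal_ok(request):
          return "declined: conflicts with this FAI's Personal Morality"
      return "fulfilled"

Two FAIs sharing the same universal_ok but different
personal_ok functions would refuse different
requests, which is exactly why I say there are many
possible Friendly Singularities.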

Eliezer's 'Friendliness' theory (whereby the AI is
reasoning about morality and can modify its own goals
to try to close in on normalized 'Universal Morality')
is currently only dealing with the 'Universal
Morality' component of morality.

But if I am right, then all stable FAIs have to have
an observer-centered (Personal Morality) component to
their morality as well.

So it's vital that FAI programmers give
consideration to just what the 'Personal Morality' of
an FAI should be. The question of personal values
cannot be evaded if non-observer-centered FAIs are
impossible. Even with Universal Morality, there would
have to be a 'Personal Morality' component chosen
directly by the programmers (this 'Personal Morality'
component is arbitrary and non-renormalizable).

To sum up: my theory is that all stable FAIs have
moralities of the form:

Universal Morality x Personal Morality

Only the 'Universal Morality' can be normalized.

=====
Please visit my web-site at: http://www.prometheuscrack.com
