Universal Morality - Some specific ideas

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Tue Mar 02 2004 - 21:37:49 MST


Ben was asking what I thought the actual content of
Universal Morality might be. Well, I'll try to give a
quick summary.

I've already indicated that I'm skeptical of Eliezer's
ideas - Volitional Morality. 'Volitional Morality', as
I understand it, refers to a morality which equates
good with the fulfillment of other sentients' desires,
subject to the provisos that the desires are what the
sentients truly want, and that the desires don't harm
other sentients. The problem with this can be seen if
we do a thought experiment and imagine that the whole
universe was filled with nothing but sentients
operating off Volitional Morality:

Each sentient wants to help others. So the goal
systems of the sentients look like this:

Sentient 1 : I want to help others
Sentient 2: I want to help others
Sentient 3: I want to help others
.... etc

Take Sentient 1. It wants to help Sentient 2. But
Sentient 2 wants to help other sentients, who in turn
want to help other sentients, who in turn want to help
other sentients...

So the goal system of each sentient ends up in an
infinite regress:

Sentient 1: I want to help others to help others to
help others to help others to help others to help
others to help others ....

Volitional Morality is meaningless without some
additional goals on the part of sentients. Helping
others makes no sense if sentients don't have some
personal goals that they need help with.

This proves that Universal Morality is not equivalent
to Volitional Morality.
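Here's a toy sketch of the regress, in Python (my own
illustration - the agent setup and names are invented,
not anyone's actual goal architecture). If every
agent's goal is defined only as 'help the next agent
achieve its goal', goal evaluation never bottoms out;
add one personal goal and it grounds immediately:

# Toy model of purely volitional agents (hypothetical,
# for illustration only). An agent's entry is either a
# personal goal (a string) or None, meaning its only
# goal is to help the next agent.
def goal(agent_id, agents, depth=0, max_depth=10):
    """Expand an agent's goal; a personal goal is the
    only thing that terminates the recursion."""
    personal = agents[agent_id]
    if personal is not None:
        return personal              # grounded in a personal goal
    if depth >= max_depth:
        return "...ad infinitum"     # never grounds out
    nxt = (agent_id + 1) % len(agents)
    return "help others to " + goal(nxt, agents, depth + 1, max_depth)

# All agents purely volitional: an infinite regress.
print(goal(0, [None, None, None]))
# One agent with a personal goal: the regress terminates.
print(goal(0, [None, None, "grow food"]))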

There's another argument against Volitional Morality:

Any real world FAI will not have infinite
computational resources, and the continued existence
of its goal system will not be certain. The FAI can't
help sentients if its goal system ceases to exist.
The end of the goal system would contradict Volitional
Morality (because dead FAIs can't help anyone).
Therefore all FAIs would have to divert some of their
finite resources into maintaining their own existence
(or at least into maintaining the existence of some
sentients with Volitional Morality as their goal
system). So it seems that the Survival Imperative has
to be a sub-goal: the FAI has to maintain its own
existence or else replicate itself.

We can see this with altruistic real world humans. If
all they did was wander the streets trying to 'help
others', they would (a) soon run out of money, and (b)
soon keel over from exhaustion, exposure, and lack of
food and drink. Even altruistic humans have to divert
a sizable fraction of their 'computational resources'
into maintaining their own existence.

Here's the problem: maximization of the goal 'help
others' requires that the majority of computational
resources be diverted to altruistic goals (in a
competition between altruism and its own survival,
the FAI has to choose altruism). But maximization of
the goal 'help others' also requires the continued
existence of the goal system, and this requires that
the majority of computational resources be diverted
to the Survival Imperative - either replication or
self-protection. We have a logical contradiction!
This is not a matter of sub-goal stomp; this is a
plain flat logical contradiction. It proves that
Volitional Morality cannot, in fact, be the super
goal.
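To make the contradiction concrete, here's a minimal
sketch (my own toy formalisation - the 'majority'
thresholds are assumptions drawn from the argument
above, not from any FAI theory). Let each imperative
demand more than half of one unit of resources:

# Toy resource-allocation check (illustrative only).
def satisfiable(altruism_share, survival_share):
    """Can both demands be met from one unit of
    computational resources?"""
    return altruism_share + survival_share <= 1.0

# Each imperative claims the *majority* of resources:
print(satisfiable(0.51, 0.51))  # False - the contradiction
# Balanced shares, by contrast, are feasible:
print(satisfiable(0.50, 0.50))  # True

Two goals that each demand a majority of the same
finite resource pool cannot both be maximized; at
best they can be balanced.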

I hope I've persuaded you that altruism cannot be the
central purpose of existence (although it may be a
PART of the purpose). So what do I think is a better
candidate for Universal Morality?

Well, I'm much more in agreement with Ben. I think
Universal Morality has much more to do with what he
calls 'Joyous Growth'. I see the universe as 'a work
in progress' which has the 'goal' of exploring its
own nature. Sentients are the 'agents' through which
the universe comes to know itself. There are three
major themes pertinent to this:

(1) Self-Creation
(2) Self-Exploration and
(3) Self-Betterment

So for Universal Morality I would include: the drive
to create things of value (self-creation), the drive
for knowledge (self-exploration), and the drive for
self-betterment (perfectionist ethics). So yes, I
think 'Joyous Growth' would be a good description.

I would also include Altruism and the Survival
Imperative. There is little doubt that Altruism is at
least a PART of what it means to be moral. As for the
Survival Imperative, I gave arguments above for
including it as part of the super-goal. In order to
explore its own nature, the first requirement of the
universe is that it maintain its own existence. So
part of
the purpose of the universe is to continue to exist!
And that's true for Sentient beings as well. I call
this 'Immortalist Morality' - sentients seek to
maintain their own existence. Of course the quest for
immortality has to be balanced against the other goals
I mentioned. To sum up, I see Universal Morality as
consisting of 5 major goals, given roughly equal
priority:

(1) The drive for self-betterment
(2) The quest for knowledge
(3) The drive to create things of value and beauty
(4) The quest for immortality
(5) Altruism

That's my guess. You can see that I think that
'Universal Morality' is a pretty complex beast.

It gets more complicated! ;) I think that some input
from 'Personal Values' is required. As I said, I
don't regard a 'Non Observer Centered FAI' as stable.
The reason, as I explained, is that the major goals I
postulated as Universal Morality (1-5 above) don't
make sense without a 'personal values' input.

For instance, the Universal Imperative 'Continue to
exist' (the quest for immortality) is 'Non Observer
Centered' in the GENERAL sense that the imperative
holds for everyone. But it requires observer centered
input in the sense of the SPECIFIC steps needed for
each sentient to fulfill it.

All sentient minds are represented as an interaction
between 'Joyous Growth' and personal values. 'Joyous
Growth' is the GENERAL Universal Imperative, while
the personal goals are the many different possible
SPECIFIC expressions of the general case.
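A minimal sketch of this interaction (hypothetical -
the function and the specific needs listed are mine,
purely for illustration): one GENERAL imperative,
specialised into SPECIFIC steps by observer centered
input:

# One general imperative, many specific expressions.
# The 'needs' table is an invented stand-in for the
# observer centered input each sentient supplies.
def continue_to_exist(sentient):
    """The GENERAL imperative 'continue to exist',
    specialised by a sentient's SPECIFIC situation."""
    needs = {
        "human": ["eat", "sleep", "stay warm"],
        "FAI": ["secure power supply", "back up goal system"],
    }
    return [sentient["name"] + ": " + step
            for step in needs[sentient["kind"]]]

# Same imperative, different specific steps:
print(continue_to_exist({"name": "Alice", "kind": "human"}))
print(continue_to_exist({"name": "FAI-1", "kind": "FAI"}))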

That's why I said that:

UNIVERSAL MORALITY x PERSONAL MORALITY = MIND

Here's a quantum analogy:

UNIVERSAL MORALITY is equivalent to the 'wave
function' of Friendliness. A friendly PERSONAL
MORALITY is equivalent to a 'particle' of
Friendliness.

All real world FAIs have to be observer centered and
are like unto a 'particle' of Friendliness, just as
all real world objects are only directly observable
in specific physical states. Any FAI is a SPECIFIC
expression of the GENERAL case of Friendliness. Why
do I think that all FAIs have to be observer
centered? Because all moralities can only be
instantiated in specific individual sentients, and
each sentient exists at unique space-time
co-ordinates. An input which is a function of these
unique co-ordinates is required, and represents the
personal morality of the FAI.

I defined 'a morality' as 'a goal system'. But the
universe as a whole has a goal system: 'the laws of
physics'. So you could say that 'the laws of physics'
represent Universal Morality (the morality of the
Universe). Any particular physical system has unique
parameters. These are the 'boundary conditions' or
'specific input' needed in order to make the laws of
physics predict things in the directly observable
world. Any particular physical system could be said
to have a 'Personal Morality' (a goal system)
represented by the physical parameters which define
its state. So the universe as a whole is a kind of
'mind', and so is every part of it (panpsychism).
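By way of analogy, here's a toy example (my own - the
numbers and setup are invented): a general law
predicts nothing by itself; combined with a system's
specific boundary conditions, it yields a definite
observable history:

# General law + specific boundary conditions =
# a particular observable trajectory (illustrative).
G = 9.81  # the 'universal' part: one law for all systems

def trajectory(height, velocity, dt=0.1, steps=5):
    """Integrate constant-gravity motion from a
    system's specific initial conditions."""
    states = []
    for _ in range(steps):
        states.append(round(height, 2))
        velocity -= G * dt
        height += velocity * dt
    return states

# Same law, different 'personal' parameters,
# different histories:
print(trajectory(height=100.0, velocity=0.0))
print(trajectory(height=10.0, velocity=5.0))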

The point is easier to grasp when you view all of
reality as one giant computer. Think of reality as a
giant 'virtual reality' computation, like 'The
Matrix'. Then realise that a 'computation' is a 'goal
system' which represents a meaning. For instance, a
personal computer working out a tax return creates a
meaning - in this example, the computation carries
the meaning associated with tax returns.

Now... take the whole computation which represents the
history of the entire universe. This computation is
the meaning of life! Remember... the computation IS
the universe. The computation is identical to the
laws of physics, and the laws of physics are the
morality of the Universe (Universal Morality).

So, the MEANING of existence IS existence (You need to
think about this statement very carefully). The map
IS the territory.

UNIVERSAL MORALITY x PERSONAL MORALITY = MIND

and

MIND=REALITY

=====
Please visit my web-site at: http://www.prometheuscrack.com



