Defining Right and Wrong

From: Michael Roy Ames (michaelroyames@hotmail.com)
Date: Sat Nov 23 2002 - 20:28:37 MST


Eliezer wrote to extropians on Fri, 22 Nov 2002 21:09:11 -0500:

<quotation>
This doesn't really deal with the question of whether "right" and
"wrong" can really be said to "refer" to anything. "Truth", for
example, is not a physical property of a belief, yet turns out to
have a ready interpretation as the correspondence between a belief
and reality. Numbers are not physical substances, but I would not
regard 2 + 2 = 4 as subjective. Morality might turn out to have the
same kind of interpretation. For example, if we accept that, for
whatever reason, murder is "undesirable" - maybe undesirability is
somehow a physical substance inherent in the murder event, if that's
your demand for objectivity - then we would readily conclude the
undesirability of events *leading to* a murder event, even if no
physical substance of undesirability appears in those events.
Perhaps morality may not need to be a physical substance to begin
with.

Or we could always try to actually *create* some kind of objective
fact whose ontological status is such as to give our moral intuitions
something to refer to, but that might require Type IV godhood.

</quotation>

---
I am beginning to think Type IV godhood is not needed to ground Right
and Wrong in objective reality.  If our Friendly AI is going to be a
Bayesian reasoner, then ve merely has to select a theory defining
Right and Wrong, and test how well that theory describes reality.
There is no reason why we cannot do the same right now.  There is a
link between this desire for grounding and a desire expressed in my
previous post referring to freedom... the link being: "What is it,
exactly?"  So, at the risk of laying out some more 'half-baked'
ideas, I will attempt a first pass at a definition.
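To illustrate that theory-testing step, here is a toy sketch in
Python.  The candidate theories, the priors and the likelihoods are
all invented for illustration; the only real content is Bayes' rule
(fixed font):

# Toy Bayesian comparison of two candidate theories of Rightness.
# The theories, priors and likelihoods below are hypothetical;
# only the update rule itself is real.

def posterior(priors, likelihoods):
    """Return P(theory | data) given priors and P(data | theory)."""
    joint = {t: priors[t] * likelihoods[t] for t in priors}
    total = sum(joint.values())
    return {t: p / total for t, p in joint.items()}

priors = {'theory_A': 0.5, 'theory_B': 0.5}
# How well each theory predicted a batch of observed human
# judgements about Right and Wrong (made-up numbers):
likelihoods = {'theory_A': 0.8, 'theory_B': 0.2}
print(posterior(priors, likelihoods))
# -> {'theory_A': 0.8, 'theory_B': 0.2}

Nothing hangs on the numbers; the point is only that a definition of
Rightness can be scored against observation like any other
hypothesis.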
Eliezer's theories describe the initial 'cut' of a Friendly AI's
friendliness-definition-space, composed of a cloud of pan-human
characteristics, opinions and beliefs.  This appeared to me (in part)
to be a visualization of what would be needed to encode Right and
Wrong in software.  The definition also appeared so complex that I
suspected it resided on a level close to ground - and that a higher
level of abstraction was possible.  Such a level might facilitate
further discussion (*not* redefine Friendly AI).  There is a danger
here, of trying to oversimplify things, but I think it would be
useful to encapsulate some ideas in a 'short' definition in order to
further discussion with a wider audience.
I've put together the following higher level definition for an
absolute Right--Wrong continuum, hereafter referred to as Rightness.
*************
Rightness:
----------
i) Rightness is a continuum, here defined as a real number between
zero and one, where zero is 'the worst kind of Wrong' and one is
'the best kind of Right'.
ii) The ability to assess the Rightness of an action increases with
intelligence.
iii) The ability to assess the Rightness of an action increases with
situational-knowledge.
iv) The knowable limits of Rightness vary with intelligence,
situational-knowledge and the range of actions possible.
v) The Rightness of an action is proportional to the change in the
complexity of the situation that the action brings about.
Therefore (fixed font):
AaR = Sk * I
Knowable limits of Rightness = [AaR*min(C), AaR*max(C)]
     AaR*chosen(C) - AaR*min(C)
R = ----------------------------
     AaR*max(C) - AaR*min(C)
Where:
AaR = Ability to assess rightness (individual)
Sk = Situational knowledge (individual)
I = Intelligence (individual)
C = Complexity of Situation
min = minimum complexity delta over the available actions
max = maximum complexity delta over the available actions
chosen = complexity delta of the chosen action
R = Rightness (of action) in range [0,1]
***************
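To check the arithmetic, here is a minimal Python sketch of the
formula above.  The function name and the toy numbers are my own;
note that AaR cancels out of the ratio algebraically, though it
still sets the width of the knowable-limits window (fixed font):

def rightness(sk, i, c_min, c_max, c_chosen):
    """Rightness of a chosen action, per the definition above.

    sk       - situational-knowledge of the individual (Sk)
    i        - intelligence of the individual (I)
    c_min    - minimum complexity delta over the available actions
    c_max    - maximum complexity delta over the available actions
    c_chosen - complexity delta of the action actually chosen
    Returns (R, (low, high)): R in [0,1] plus the knowable limits.
    """
    aar = sk * i                      # AaR = Sk * I
    low, high = aar * c_min, aar * c_max
    # AaR cancels in the ratio, so R depends only on where the
    # chosen delta falls between the minimum and maximum deltas.
    r = (aar * c_chosen - aar * c_min) / (aar * c_max - aar * c_min)
    return r, (low, high)

# Toy example: Sk=0.5, I=2.0, actions with complexity deltas
# from -3 to +5, and the agent picks the +1 action.
print(rightness(0.5, 2.0, -3.0, 5.0, 1.0))   # R = 0.5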
My definition of Rightness depends to a large extent on how I define
intelligence, situational-knowledge and complexity, so I here offer
further working definitions:
Intelligence:
-------------
Ability to achieve complex goals in complex environments. (Thanks to
Ben Goertzel)  Implied in this definition is that the greater the
intelligence, the greater the ability to determine how a given action
will change the complexity in a situation.
Situational-knowledge:
----------------------
If a sentience simply knows that there is a situation and nothing
more, then ver knowledge of that situation is zero.
Situational-knowledge can be bounded (at least) by space, time and
detail, therefore:
 Space  - the larger the volume of space that our
          situational-knowledge covers, the higher the measure.
 Time   - the further back in time our situational-knowledge
          goes, the higher the measure.
 Detail - the greater the detail of our situational-knowledge,
          the higher the measure.
Knowing the position and speed of every elementary particle in the
universe since the dawn of time is the maximum situational-knowledge
(as far as I know ;).
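One crude way to collapse those three bounds into a single number -
my own illustrative choice, not part of the definition - is a
product of fractional coverages (fixed font):

def situational_knowledge(space_frac, time_frac, detail_frac):
    """Toy scalar measure of situational-knowledge.

    Each argument is the fraction (0..1) of the relevant bound
    actually covered: volume of space, depth of time, and level
    of detail.  A product makes the measure zero if any dimension
    is entirely unknown, and 1.0 only for the ideal observer who
    knows every particle since the dawn of time.
    """
    return space_frac * time_frac * detail_frac

print(situational_knowledge(0.001, 0.01, 0.1))  # a human: tiny
print(situational_knowledge(1.0, 1.0, 1.0))     # the maximum, 1.0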
Complexity:
------------
Complexity is a measure of:
 (Amount of Information) AND
 (Levels of content) AND
 (Variety of content) AND
 (Ability to increase complexity)
Examples:
Amount of information:  Zero bits is none; 10^15 bits is a lot.
Levels of content:  A string of random ones and zeros is a single
level of content.  A modern multimedia CD has several levels
including: a string of ones and zeros, collections of eight bits
representing bytes, collections of bytes representing sound,
collections of sounds representing words, collections of words
representing sentences... you get the idea.  The more levels, the
more complexity.
Variety of content:  A CD with 20 occurrences of 'Da-Da-Da' on it has
less variety than a CD with 20 different tracks on it.
Ability to increase complexity:  A granite rock has zero ability to
increase complexity.  A living cell can reproduce, thus increasing
complexity.  A human can not only reproduce, but also think
creatively and invent things, increasing complexity much faster than
a cell.
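As a toy illustration - the weighting is my own invention, since the
definition only says the four factors are AND-ed together - here is
how the two CDs above might be scored (fixed font):

import math

def complexity(bits, levels, items, growth):
    """Toy composite complexity score (illustrative weighting only).

    bits   - amount of information, in bits
    levels - number of levels of content (bits, bytes, sounds, ...)
    items  - list of content items, used to measure variety
    growth - ability to increase complexity (0 for a granite rock,
             higher for a cell, higher still for a human)
    """
    variety = len(set(items))           # distinct items = variety
    # A product captures the AND: zero in any factor zeroes the score.
    return math.log2(1 + bits) * levels * variety * (1 + growth)

# Same bits and levels, different variety:
dadada = complexity(5e9, 5, ['Da-Da-Da'] * 20, 0)
mixed  = complexity(5e9, 5, ['track%d' % i for i in range(20)], 0)
print(dadada < mixed)   # True: more variety, more complexity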
************
Note 1:
-------
I am here acknowledging that Rightness is an absolute, the limiting
values of which can only be known by a being having infinite
intelligence and complete situational-knowledge.
Note 2:
-------
For a given level of intelligence and situational-knowledge there is
a 'window' of values for Rightness that one can choose from, such
that one can know (approximately) what level of Rightness one has
chosen.  It is possible for a sentience to choose an action that is
outside of the window, but that choice cannot be known to be more (or
less) Right by that being at that time.  Therefore, although I posit
an absolute continuum of Rightness, an individual being will only
ever view a window onto that continuum.  There is nothing that can be
done about this, as beings have limitations, and hindsight doesn't
count - i.e. it doesn't help you make a decision of greater
Rightness.
Note 3:
-------
Opinions (whether true, false or somewhere in between) are orthogonal
to the idea of Rightness.
Note 4:
-------
Rightness can only exist in the minds of sentients, and without a
sentience involved in an action, the idea becomes meaningless.  E.g.:
It is meaningless to say "It is Right for the Moon to orbit the
Earth", but it is meaningful to say "When the Captain chose this
orbit, it was the Right thing to do".
***********
So, to return to my intention for creating the definition, does the
definition in fact approximate the pan-human cloud of
characteristics, opinions and beliefs that currently define Rightness
in the world today?  As to that, I can only make unfounded
assertions, as the 'homework' for gathering this information has yet
to be done... but I intuit that it might.  A better question might
be: Do any here find the definition useful?
As for my own 'opinions and beliefs'... this description seems to
come close to my introspection of how I assess Rightness.
Comments and criticisms are welcome.
Michael Roy Ames

