More musings: Hologram analogy and 'The Fundamental Theorem of Morality' again

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Thu May 20 2004 - 23:30:02 MDT


Eli’s recent posts sparked some additional ideas in my
brain yesterday. I have my own theory of
Friendliness, and the final pieces have now come
together.

What follows is a quick summary of these latest
musings.

-------------------------------------------------------

‘Take the visual image of something as an analogy for
a thing. Then regard a hologram of that image as an
analogy for the mental representation (concept). Now
a hologram has the property that cutting the picture
in half does not result in half an image. The full
image is still there, only at a lower resolution.
You could go on cutting up a hologram and still see
the full image in each fragment, only at an ever
lower resolution. So we could take the 'resolution'
of the hologram as analogous to a measure of how well
a mind understands a given concept, and the many
possible 'fragments' of the original hologram as
analogous to the individual viewpoints of the concept
held by people with varying levels of intelligence
(everyone sees the same thing, but they all see it
slightly differently).

If the analogy is valid, then even a sentient of very
low intelligence could still obtain some kind of
'understanding' of any concept, no matter how complex
or abstract. It is simply that the low-IQ sentient
sees the concept in 'low resolution', like a very
low-res holographic image. The sentient can still
dimly make out the broad outlines of the image. Now
let's imagine 'turning up the IQ' of the sentient.
This would be equivalent to increasing the resolution
of the hologram, so that the image becomes sharper
and sharper.
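
(As a very rough numerical sketch of the 'every fragment
still shows the whole image' property: in the Fourier
transform of an image, every region of the transform
carries information about every pixel, so keeping only a
fragment of it and reconstructing still gives the entire
image, just blurrier. A hologram is not literally a
Fourier transform, and everything below - the array
sizes, the test pattern, the function name - is an
arbitrary illustration of mine, not part of the argument.)

import numpy as np

# Keep only a central fragment of an image's 2-D Fourier
# transform and reconstruct: the whole image survives,
# just at lower resolution.
def reconstruct_from_fragment(image, keep_fraction):
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = spectrum.shape
    kh = max(1, int(h * keep_fraction) // 2)
    kw = max(1, int(w * keep_fraction) // 2)
    mask = np.zeros_like(spectrum)
    mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# A crude test pattern: a bright square on a dark background.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0

# Smaller fragments -> blurrier, but still the whole picture.
for frac in (1.0, 0.5, 0.25, 0.1):
    error = np.abs(reconstruct_from_fragment(image, frac) - image).mean()
    print(f"fragment {frac:>4}: mean error {error:.3f}")

Even when only a tenth of the 'hologram' is kept, the
square is still recognisably there, just with very soft
edges.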

Let's take the concept of 'Friendliness'. Consider an
FAI with post-human super intelligence. Can we puny
humans say anything about the AI's morality? If the
hologram analogy is valid, we can. We may not be
entirely sure what 'good' and 'evil' are, but we can
recognize them, after a fashion, when we see them.
That is, we could be said to have access to a very
low-res hologram of 'good' and 'evil': the whole
abstract 'form' is available to us; we just don't see
it clearly. So consider our rapidly self-improving
AI. As it grows smarter and smarter, it is as though
the resolution of its 'morality' hologram is
continuously increasing: in effect, an 'optimization'
of morality. The super-intelligent post-human FAI will
have a 'morality' hologram of enormously increased
resolution. But there is still a common frame of
reference with us puny humans. The 'morality'
hologram is still accessible to us in its entirety
(albeit in the much degraded low-res form).

Now, if I may mention my 'Fundamental Theorem of
Morality' again, you'll see how these ideas tie in to
it.

You may recall I described my theorem by the formula:

Universal Morality x Personal Morality = Mind

[You can read a sketch outline of my theorem here:
http://www.sl4.org/bin/wiki.pl?FundamentalTheoremofMorality
]

Taking the hologram analogy, the term 'Universal
Morality' is equivalent to the complete image
representing the moral concept 'Goodness'. 'Personal
Morality' is equivalent to another image representing
arbitrary goals determined by a specific physical
instantiation of a mind (for instance in humans these
would be biological constraints and goals established
by evolution). The 'x' (multiplication sign)
represents the interaction between the two images. We
could superimpose the two images to produce a
hologram: the result 'Mind' (the mind or goal system
of a sentient) is equivalent to a somewhat lower
resolution image of 'Goodness' mixed with arbitrary
elements. We want to 'optimize' the 'Mind' to produce
increasingly 'higher resolution' representations of
'Goodness' (Universal Morality). The FAI problem
amounts to knowing how to optimize ‘Personal Morality'
so as to enable 'Universal Morality' to shine through.
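
(To make the 'x' a little more concrete, here is one
possible toy reading of the formula; every array,
constant, and function name below is an arbitrary
placeholder of mine, not part of the theory. Treat
Universal Morality as one fixed signal, Personal
Morality as a noisy pointwise filter laid over it, and
'resolution' as how well the resulting Mind still
correlates with the original signal.)

import numpy as np

rng = np.random.default_rng(0)

# 'Universal Morality': one fixed signal, the same for every mind.
universal = np.sin(np.linspace(0.0, 4.0 * np.pi, 256))

def mind(interference):
    # 'Personal Morality': a noisy pointwise filter; 'interference'
    # (0..1) sets how strongly it distorts the underlying signal.
    personal = 1.0 + interference * rng.normal(size=universal.size)
    return universal * personal   # Universal x Personal = Mind

def resolution(m):
    # 'Resolution': how closely the mind's image tracks the original.
    return np.corrcoef(m, universal)[0, 1]

# 'Optimizing Personal Morality' = turning the interference down.
for interference in (1.0, 0.5, 0.1, 0.01):
    print(interference, round(float(resolution(mind(interference))), 3))

The printed correlations climb towards 1.0 as the
interference term shrinks, which is the 'shining
through' picture above in miniature.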

The most important thing to note about my theorem is
that, for all sentient minds, THE WHOLE OF
UNIVERSAL MORALITY IS ALREADY THERE. That’s right:
The entire concept of 'Goodness' (Universal Morality)
is in some sense already present in all of our minds.
Recall my formula again:

Universal Morality x Personal Morality = Mind

There is only one Universal Morality. It is Personal
Morality that has many degrees of freedom, and
Personal Morality that defines the differences between
us. So for all minds, we start the specification with
the 'Universal Morality' (Complete Goodness) term.

We note that although Universal Morality is present in
all of us, it is 'filtered' by our 'Personal
Morality'. Our Personal Morality interferes with
Universal Morality, and as a result we can only access
a very 'low resolution' image of 'Goodness'. So
learning to be more moral is an 'optimization
process': we need to start adjusting our Personal
Morality in order to let Universal Morality shine
through. And we'll do this through a combination of
self-discipline and changes to the biological
structure of our minds (making ourselves post-human).
As our mind/morality/goal system is optimized, we will
have access to higher and higher resolution 'images'
of Universal Morality (Complete Goodness).’

=====
"Live Free or Die, Death is not the Worst of Evils."
                                      - Gen. John Stark

"The Universe...or nothing!"
                                      -H.G.Wells

Please visit my web-sites.

Science-Fiction and Fantasy: http://www.prometheuscrack.com
Science, A.I, Maths : http://www.riemannai.org



