Re: Geddes's 'Moral Perturbation Theory'

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Mon Jun 21 2004 - 00:39:51 MDT


--- Eliezer Yudkowsky <sentience@pobox.com> wrote:
>
> If you mean that asking me for my personal philosophy is likely to get
> you a *better* approximation of a CV than asking Britney Spears, then
> you are probably right; but *hopefully*, Marc, the CV will be wiser
> than us *both*. It'd be frustrating to do all that work, and then find
> out that I could have taken over the world and done as good a job. And
> boring, if the world were so dull and prosaic as my wildest imaginings.
>
> The initial dynamic itself runs on (takes a deep breath) one human,
> one vote.
>
> If an RPOP wanted a good first approximation for a sample, it'd pick a
> hundred humans most a-priori likely to be informative about the
> largest clusters in the set of final extrapolated volitions, and then
> extrapolate out their volitions from that starting point. Contrary to
> your intuitions, Marc, this means using 100 ordinary folks. And then
> you extrapolate those people knowing more, thinking faster, growing up
> farther together; which may or may not arrive at an interim point
> vaguely reminiscent of Eliezer Yudkowsky before the extrapolation
> moves on. Probably not. The circumstances that forged me are too
> unusual. The same would hold of those other geniuses that one might
> consider. By the time a majority of humanity zips past the Einstein
> milestone for raw intelligence, they may have grown in other ways that
> would render Einstein a pointless comparison. If you are someday as
> bright as Newton I do not think you will become an alchemist, and I do
> not think you will linger long at Newton's marker. I am not a symbol
> of an extrapolated Indian day laborer who knows more, thinks faster.
> I am myself. Just myself. One human, one vote.
>
> Look to the best, brightest, most altruistic humans, and you will find
> that they no longer come up with elaborate justifications for why they
> should be philosopher-kings. Even if it is cleverly disguised.
>
> --
> Eliezer S. Yudkowsky
> http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence

Well, instead of looking at individual humans, let's
focus on 'mental characteristics'. For instance, your
volition consists of a mix of many different mental
characteristics, which I'll call eli1, eli2, eli3, etc.
So:

Eliezer's Volition = eli1 x eli2 x eli3 x eli4 x eli5 ...

I'll call the characteristics of the Indian laborer
ind1, ind2, ind3, etc.

So:

Indian laborer's Volition = ind1 x ind2 x ind3 x ind4 ...
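
To make the notation a bit more concrete, here's a toy
Python sketch. It's purely illustrative: the trait names
and numeric 'strengths' are invented, and treating a
volition as a literal product of numbers is of course a
cartoon of the idea.

# Toy sketch (invented names and numbers): each mental characteristic
# gets a strength in (0, 1], and a person's 'volition' is modelled as
# the literal product of those strengths.
from functools import reduce

eliezer_traits = {"eli1": 0.9, "eli2": 0.8, "eli3": 0.95,
                  "eli4": 0.7, "eli5": 0.6}
indian_traits = {"ind1": 0.85, "ind2": 0.5, "ind3": 0.9,
                 "ind4": 0.4, "ind5": 0.75}

def volition(traits):
    """'Multiply' the characteristics together, as in the equations above."""
    return reduce(lambda acc, strength: acc * strength, traits.values(), 1.0)

print(f"Eliezer's Volition = {volition(eliezer_traits):.3f}")
print(f"Indian's Volition  = {volition(indian_traits):.3f}")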

Let's look at things from the perspective of the
quantum multiverse. Imagine a pair of 'Everett
Goggles' which enabled you to view a person's mental
characteristics across alternative branches of the
multiverse. There would be some variation in each
person's mental characteristics, but there would also
be common characteristics.

Looking at the characteristics making up Eliezer's
volition, I'd expect to find less variation across the
multiverse. Most of the alternative Eliezers are
still altruistic, believe in the multiverse, are
libertarian-leaning, believe in Bayesian reasoning,
and so on. On the other hand, looking at the volition
of the Indian laborer across the multiverse should
expose a much higher degree of variation. In some
branches the Indian laborer believes in astrology, in
others he's a believer in reading tea leaves, and so
on.

The point is that looking at Eliezer's volition should
yield a higher frequency of mental characteristics
that converge across the multiverse.

So I would expect there to be more 'coherence' in the
mental characteristics of a sample of, say, 500 of the
world's 'best and brightest' than in a sample of 500
people picked at random.
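
Here's a rough Python sketch of the 'Everett Goggles'
idea (purely illustrative: the branch count, noise
levels, and the coherence measure are all my own
inventions). Traits that vary little from branch to
branch are the convergent ones; averaging the same
calculation over a sample of 500 people would give the
group comparison I have in mind.

# Toy 'Everett Goggles' simulation (invented parameters): sample each
# trait across many branches of the multiverse; a person is more
# 'coherent' the less their traits vary from branch to branch.
import random
import statistics

random.seed(0)
BRANCHES = 1000

def sample_branches(trait_spread):
    """trait_spread maps trait name -> cross-branch standard deviation."""
    return {trait: [random.gauss(0.7, spread) for _ in range(BRANCHES)]
            for trait, spread in trait_spread.items()}

def coherence(person):
    """Higher when traits vary little across branches (more convergence)."""
    variances = [statistics.pvariance(values) for values in person.values()]
    return 1.0 / (1.0 + statistics.mean(variances))

# An Eliezer-like mind: traits mostly stable across branches.
stable = sample_branches({"eli1": 0.05, "eli2": 0.05, "eli3": 0.10})
# A mind whose traits wander a lot from branch to branch.
variable = sample_branches({"ind1": 0.40, "ind2": 0.50, "ind3": 0.45})

print(f"coherence(stable mind)   = {coherence(stable):.3f}")
print(f"coherence(variable mind) = {coherence(variable):.3f}")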

Now I make the assumption that when different mental
characteristics are mixed, the result is not
chaotically related to the individual components, but
is in some sense more like multiplication.

For instance, take one mental trait from Eliezer and
mix it with a mental trait from the Indian laborer,
thus:

ind1 x eli2 = ?

I presume that the result is something which is a
recognisably predictable transform of both traits, and
not something wildly different and novel.
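
In the toy numerical terms used above (my own
illustration, with the mix modelled literally as
multiplication), the assumption is just that the
combined value is a smooth, predictable function of
both inputs:

# Sketch of the 'mixing is like multiplication' assumption
# (invented strengths).

def mix(trait_a, trait_b):
    """ind1 x eli2 = ? -- modelled here simply as multiplication."""
    return trait_a * trait_b

ind1, eli2 = 0.85, 0.8
print(f"{mix(ind1, eli2):.3f}")         # 0.680
print(f"{mix(ind1 + 0.01, eli2):.3f}")  # 0.688 -- a small change in the
                                        # input gives only a small change
                                        # in the output, nothing wildly
                                        # novel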

It's reasoning like this that led me to formulate my
'Fundamental Theorem of Morality'. You may recall my
equation:

Universal Morality x Personal Morality = Mind

The Universal Morality (UM) is made up of the mental
traits converging in your alternative selves across
the quantum multiverse. The Personal Morality (PM) is
made up of your unique personal traits for which there
is no convergence across your alternative selves.

So, for instance, taking Eliezer's psyche, his
converging traits are the UM component and his unique,
unusual traits are the PM component. For instance:

UM = eli2 x eli3 x eli6 ...
PM = eli1 x eli4 x eli5 ...

And then

Volition (Eliezer) = UM x PM

If we divide out the eccentric traits (Eliezer's PM
component), we are left with the UM component, which
can be used as a template. The template could be used
to help specify the UM component of other people's
volitions, which would then be transformed by a PM
component unique to each individual.

For instance, for the Indian laborer:

PM = ind1 x ind4 x ind5 ...

Using Eliezer's UM component as a transform, we could
obtain an extrapolation of the Indian laborer's
volition:

Volition (Extrapolated Indian) = UM x PM
  = eli2 x eli3 x eli6 x ind1 x ind4 x ind5 ...
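
Putting the pieces together in the same toy Python model
(the trait names match the ones above, but the strengths,
branch variances, and the convergence cutoff are invented
for illustration): split Eliezer's traits into UM and PM
by how much they vary across branches, discard his PM,
and multiply his UM by the laborer's PM.

# Toy extrapolation sketch (invented numbers): (strength, cross-branch
# variance) per trait; low variance = convergent = UM, high variance = PM.
from functools import reduce

eliezer = {"eli1": (0.90, 0.30), "eli2": (0.80, 0.02),
           "eli3": (0.95, 0.03), "eli4": (0.70, 0.25),
           "eli5": (0.60, 0.35), "eli6": (0.85, 0.04)}
indian = {"ind1": (0.85, 0.40), "ind2": (0.50, 0.05),
          "ind3": (0.90, 0.06), "ind4": (0.40, 0.45),
          "ind5": (0.75, 0.50)}

CONVERGENCE_CUTOFF = 0.1  # variance below this counts as 'convergent'

def split_um_pm(person):
    """Split a trait set into (UM, PM) by cross-branch variance."""
    um = {t: s for t, (s, var) in person.items() if var <= CONVERGENCE_CUTOFF}
    pm = {t: s for t, (s, var) in person.items() if var > CONVERGENCE_CUTOFF}
    return um, pm

def product(traits):
    return reduce(lambda acc, s: acc * s, traits.values(), 1.0)

eliezer_um, _eliezer_pm = split_um_pm(eliezer)  # UM: eli2, eli3, eli6
_indian_um, indian_pm = split_um_pm(indian)     # PM: ind1, ind4, ind5

extrapolated = product(eliezer_um) * product(indian_pm)
print("UM template traits:", sorted(eliezer_um))
print("PM traits kept:    ", sorted(indian_pm))
print(f"Volition (Extrapolated Indian) = {extrapolated:.3f}")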

  

=====
"Live Free or Die, Death is not the Worst of Evils."
                                      - Gen. John Stark

"The Universe...or nothing!"
                                      -H.G.Wells

Please visit my web-sites.

Science-Fiction and Fantasy: http://www.prometheuscrack.com
Science, A.I, Maths : http://www.riemannai.org



