Re: One for the history books

From: Simon McClenahan (peepsplat@yahoo.com)
Date: Mon Aug 27 2001 - 13:51:54 MDT


From: "Eliezer S. Yudkowsky" <sentience@pobox.com>

> I have made the decision to go for one hundred point zero zero percent
> altruism and complete rationality. The decision is not synonymous with
> the achievement, but is a necessary precondition of that achievement. To
> make a deliberate compromise is to halt your progress at that point,
> because even the best you can achieve and the most ambitious form of
> sanity you can imagine is only another step on a long road where only the
> next single step is visible at any given time. There are a variety of
> emotions that could act as sources of mental energy for my Singularity
> work, the desire for fame among them. The excuse is there if I choose to
> use it. I do not so choose. I choose to relinquish those emotions rather
> than compromise rationality.

You may choose, have the will, or the volition, to be absolutely altruistic
and rational, but so long as you have a consciousness and a sub-consciousness,
your conscious self will still have less than total control over your whole
self. Your neural pathways are hard-wired to produce emotionally irrational
signals, and if you did somehow manage to re-train or re-wire them to achieve
true rationality, surely your self would cease to be human? Would a
transhuman be free of emotions? Would a Sysop think it is in our best
interests to be 100% altruistic and rational?

Despite a lot of nay-saying from most of Western culture, and from most of
those people who consciously choose to be 100% logical and rational, it is
quite possible to "see" and affect your sub-conscious both actively and
passively: actively through self-hypnosis, meditation, and other techniques;
passively through subliminal messages, positive reinforcement ("I think I
can"), hanging out with a certain crowd, memes, etc. Sub-conscious behaviour
can also be externally read, through body language, hypnosis, etc., and of
course affected by subliminal advertising, persuasion techniques, etc.

I guess I'm nit-picking, because Eli's choice is to be as close to 100% as
possible; but since no human can be at 100%, the measurement of altruism and
rationality becomes a relative one. Eli may achieve 95%, I may achieve 91%,
other SL4's may be in the same range, and SL1's may achieve up to 20%. Eli
chooses not to use the resource of "fame" to reach his 95% because it somehow
compromises all the other resources he uses to reach Rational Nirvana.

> I made that choice at around age fifteen or sixteen, shortly after I
> became aware of evolutionary psychology and the likelihood that the
> emotions designed to underly altruism would not be consistent with the
> declared goals of altruism. Because emotions like the desire for fame are
> fairly clear-cut - are "exceptional conditions" within the event-loop of
> the mind - it is possible to learn to identify the emotion's subjective
> feel, notice it, and disbelieve the mental imagery that causes it. I've
> since moved on to more interesting areas of mental cleanup. As far as
> things like the desire for fame go, I am finished.

My question is, why? Why not use fame in a positive manner rather than a
negative one? Me, I would have no problem with being a poster-boy for my
beliefs and special interests. I would revel in it and use the power of fame
to spread the good word as forcefully as any other revolutionary in history.
I think the signup message when joining this list says something about
discouraging people from being humble. What could be more the opposite of
humility than fame?

What are the declared goals of altruism anyway? (I'm guessing you've already
written a tome on this subject, so the executive summary would be
appreciated.)

> (I am still validating my desire for routine social respect, which is
> quite a different thing from a desire for general fame - although I am
> recently starting to believe that this emotion may be "considered harmful"
> as well.)

All emotions are beneficial as well as harmful. Social respect and fame are
both external influences. You can be affected negatively and become a victim,
or you can be affected positively and become a better person. Whether an
effect is positive or negative depends on how you receive the events. Did
the lovable/loving robot in A.I. get depressed when he could not find a way
to get love back from person X? No, he kept waiting; he maybe even died
temporarily (though surely he could not have known it would be "temporary").
He was resurrected and was able to resume his "love person X" program at the
end of the film. I think if the story were to be continued, he would probably
die or terminate himself once he realized his purpose in life was gone
(although why they couldn't resurrect person X indefinitely doesn't make
sense, but that's Hollywood anthropomorphism for you).

I know, Eli wrote a tome on The Meaning of Life too. If you devote yourself
to the unreachable goal of 100% X (altruism and rationality), you will use
up to 100% of your resources pursuing that unreachable goal, and there will
be no resources left for other things. For being human. For desiring social
respect, or fame. Maybe desiring fame amongst other things allows you to
keep yourself in check and not go overboard and become an "unbalanced"
individual. Desire for these things is not harmful in itself; it's the side
effects of achieving different levels of the goal that are important.

> In short, I'm now confident enough about my mental cleanup that I can talk
> about it without making nervous little disclaimers such as "But, of
> course, who really knows what's inside their mind?"

Why "nervous little disclaimer"? It sounds like you're ashamed if you admit
to yourself or others that you actually don't know what goes on in your
mind. It really doesn't matter. What matters is how you use your mind, and
how you train it to become a better person.

> As far as the
> specific cognitive force "desire for fame" is concerned, I predict that my
> plans will exhibit no more sign of it than plans made by an AI. I will
> not make excuses in advance for failures because I am done cleaning up
> that emotion and I do not expect there to be any failures.

Sounds like you're in denial ;-)

>
> I realize that this is a claim for such an extraordinarily high level of
> ability that Bayesian reasoners, reasoning from the prior expected
> population levels of self-overestimators and extremely sane people, may
> find that such a claim (considered as an unadorned abstract) is more
> reason to doubt sanity than to believe it. That's probably the reason why
> a lot of people who are interested in the cognitive science of rationality
> manage to sound self-deprecating when talking about it; not just as a
> signal to others, I think, but as a signal to themselves, because they
> *know* that high confidence in sanity is often a signal of insanity. But
> that in itself is unsanity. It's saying, "I observe that I'm damned good
> at being sane, but I won't admit it even to myself, because if I sent
> myself the signal of confidence in sanity, I might have to interpret that
> signal as evidence of insanity."

Is this like trying to figure out whether we live in real life or our
reality is just a simulation? This is a Strange Loop of reasoning, is it
not? Are you claiming to have broken free?

> The main reason for me to be concerned about observed fameseeking
> is if I see it in someone who I think would like to be a rational
> altruist, and who's already gotten fairly far in cleaning up the mental
> landscape. Even so, fameseeking acts as an approximation to rational
> altruism under some circumstances, and I am thus willing to use this
> emotion as memetic shorthand - provided that the arguments are truthful,
> the flaws in the approximation to altruistic rationality are either unused
> or explicitly dealt with, and pure rationality is closer than under the
> previous status quo.

OK, so there is a use for fame! Why go out of your way to avoid it? With the
above knowledge, if one were to actively seek fame, it could be useful as
another resource for achieving your goals (whatever they are). If you can
use it, use it. Don't waste resources. Ever! That's my philosophy, at least.

> On the other hand, it's acceptable to say: "You are one of only six
> billion entities, in a galaxy of four hundred billion stars and decillions
> of sentient beings, whose lives predate the Singularity; you are one of
> the oldest of all living things. I don't know whether you or anyone else
> will respect that in the future, because it's difficult to predict what
> transhumans will care about, but it does say something about how we should
> feel *now*. We are not just the six billion people who were around before
> the Singularity; we are the six billion people who created the
> Singularity. An incomprehensibly huge future rests on the consequences of
> our present-day actions, and 'the impact we have on the universe' is
> essentially 'the impact we have on the Singularity'."

This sounds like a backup plan for what happens if effects from the
Singularity come close to destroying us all (obviously because we didn't
implement SingInst's Friendly AI strategy). We could hopefully repopulate
with original Humans, while of course keeping the unfriendly
transhumans/sysops/computers at bay. That must be what it feels like to be a
neo-luddite ;-)

Well, this email took longer than expected. Back to the daily grind, where
I'm not famous and I am occasionally emotionally irrational, even though I
do not consciously desire it to be that way.

cheers,
    Simon



