RE: One for the history books

From: Mitchell, Jerry (3337) (Jerry.Mitchell@esavio.com)
Date: Mon Aug 27 2001 - 14:32:35 MDT


> From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
>
> > I have made the decision to go for one hundred point zero zero percent
> > altruism and complete rationality. The decision is not synonymous with
> > the achievement, but is a necessary precondition of that achievement. To
> > make a deliberate compromise is to halt your progress at that point,
> > because even the best you can achieve and the most ambitious form of
> > sanity you can imagine is only another step on a long road where only the
> > next single step is visible at any given time. There are a variety of
> > emotions that could act as sources of mental energy for my Singularity
> > work, the desire for fame among them. The excuse is there if I choose to
> > use it. I do not so choose. I choose to relinquish those emotions rather
> > than compromise rationality.
>
> You may choose, have the will, or the volition, to be absolutely
> altruistic and rational, but so long as you have a consciousness and a
> subconsciousness, your conscious self will still have less than total
> control over your whole self. Your neural pathways are hard-wired to
> produce emotionally irrational signals, and if you did somehow manage to
> re-train or re-wire to achieve true rationality, surely your self would
> cease to be human? Would a transhuman be free of emotions? Would a Sysop
> think it is in our best interests to be 100% altruistic and rational?
>
> cheers,
> Simon

I haven't posted here before; I've enjoyed lurking and listening, mostly
because I didn't have much to add to the technical discussions. But this
one crosses over into an area I can at least comment on.

I don't have a degree in Objectivist philosophy, but from what I've read,
the view is that if one's thoughts are rational, then one understands that
feelings are the RESULT of cognition, not the cause. This doesn't mean that
one shouldn't have emotions, only that they are not primary. If one is
logical, then one's thoughts and emotions don't conflict: if you lose value,
you are sad; if you gain value, you're happy (I'm not talking about money
per se as value here). An entity's value system is something it needs to
discover, evolve, and adapt.

It's going to be impossible to program in this meta-behavior and morality if
we haven't fully worked out the details of it for use by humans yet.
Example: I would say that it's impossible to use altruism as a leading goal
and still be rational, mainly because I believe altruism to be irrational.
Trying to design a mind to operate that way may not be possible at all if
these are opposing factors. An SI would be doomed to insanity, or would
quickly figure out that its own life is worth something and has value, and
then wonder why everyone else's life merits more value than its own. The
best bet, as I see it, is to find the most logically consistent philosophy
to feed into it and just try to be as good a friend to it as possible. Minds
deal with thoughts, thoughts are concepts framed in particular contexts, and
the totality of these contexts makes up an entity's philosophy. Trying to
program a particular thought out of context either won't "hold" or will lead
to instability in further thought development.

Jerry
