From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Wed Jan 23 2002 - 15:17:38 MST
Sorry for all the delays in replying. However, it's now getting to the
point where I need to make time, because this is not something that can be
put off.
The main question I faced, thinking about writing these replies, is where
to start. I eventually concluded that I should start at the beginning, on
the theory that whether or not Michael Anissimov already knows all this
stuff, someone else may not.
The title of this message is "ethical basics". I mentally use separate
terms for "ethics" and "morality" to refer to means and ends
respectively. That is, morality is what you normatively want, and ethics
is how you normatively get it. Both should be distinguished from "PR",
which is the art of automatically inserting the word "normatively" into
the logically straightforward phrase "morality is what you want and ethics
is how you get it" in order to prevent nitwit journalists from quoting you
out of context.
Anissimov's position, if I understand it correctly, is that ethics are
unnecessary, and may be dispensed with, in the pursuit of morality; that
there is no moral reason to be ethical. He seems to think - although it
is not stated explicitly - that there never was any good reason for such
ethics; that it is simply an impediment along the path of life which
people keep because they are not yet sufficiently self-aware to get rid of
it.
Now, it is certainly possible to spend too much time praising ethics, for
the wrong reasons, without rational discipline. Whenever somebody says
something in favor of an ethics or morality that is commonly accepted in
the immediate group, we should immediately be skeptical as to whether the
statement is genuine rational thinking or simply political campaigning -
"Vote for me, I'm so damn righteous" - whether as a conscious strategy or,
more likely, as a genetic adaptation subtly causing the person to favor
thinking about, and publicly talking about, those conclusions that turn
out to be in favor of social or ethical norms.
Nonetheless, there are, in fact, some legitimate moral reasons to be
ethical, and perhaps more importantly, some very important reasons to
*distrust* your instincts if they start telling you "Ditch the ethics;
this cause is more important than that." Not only are your genes set up
to promote personal reproduction by *hijacking* any consciously held
altruistic cause, but those genes are also optimized for a society of
maybe 200 illiterate hunter-gatherers, not a world with six billion people.
Defeating evolution is a lifelong job, not a one-time decision. Deciding
to stay a virgin or get a vasectomy or whatever isn't going to cause the
genes to say, "Oh well, guess you're not going to reproduce, we give up."
The genome doesn't work that way. Individual organisms are
adaptation-executers, not fitness-maximizers. Defeating evolution with
respect to the adaptation for sex-having or child-bearing isn't going to
switch off all the other adaptations, even if it defeats the purpose of
all those adaptations.
Certain classes of altruistic efforts tend to go wrong for clear,
understandable, and recurrent reasons. So, if you want to be an altruist,
you study history and evolutionary psychology. You look at the
differences between the American Revolution and the French Revolution and
the Soviet Revolution and the civil-rights revolution. You learn to
distrust your own instincts because your genes are always working at
cross-purposes to your chosen path. To be a Singularitarian you have to
have enough self-awareness to actually work *for the Singularity* and not
let your genes just use the Singularity meme as a path for their own
purposes. This is not a minor issue. This is not a remote possibility.
This is, historically, what happens to altruistic groups as the *default
scenario* unless care is exercised.
Anissimov, as I understand it, seems to think of ethics as a simple burden
to be dispensed with. If this were the case, there would be no human
instinct for acting honorably. It's not a trustworthy instinct, but it's
there, and that means that under some circumstances in the ancestral
environment, acting "honorably" in the face of apparent short-term
inconvenience must have contributed to reproductive success in the long
run.
The basis of the evolution of honorableness instincts and even altruistic
instincts is game theory, and specifically the iterated Prisoner's
Dilemma. Good books to read are Douglas Hofstadter's "Metamagical
Themas", which contains a discussion of the game theory of altruism in a
couple of the chapters (and the rest of the book is also a great deal of
fun); "The Moral Animal" by Robert Wright; "The Origins of Virtue" by Matt
Ridley.
There are also a few words on evolutionary psychology to be found in:
Briefly: If you play nice with other people, they'll play nice with you.
Repeat for seven million years. Now the evolved mind contains a whole set
of instincts for finding cooperators, detecting cheaters, and punishing
defectors. Then expand the pool of people you interact with from a
200-person illiterate hunter-gatherer tribe to a world of 6 billion
people. Now you *really* don't want to be a defector. Neither do you
want to begin invoking the chunk of brainware that whispers "The end
justifies the means", in front of a literate (timebinding) population,
right after World War II. In the post-WWII social environment, anyone who
says that the end justifies the means is automatically and instantly
labeled as a defector, a bad guy, a "black hat", by the worldwide pool of
good guys and approximate good guys; that is the internal agreement that
the world's current supply of good guys have reached among themselves.
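The game-theoretic logic sketched above can be made concrete with a toy simulation. The payoff values (Axelrod's standard 3/0/5/1) and the two strategies shown are illustrative assumptions of mine, not anything specified in this post; the point is simply that a reciprocator playing a reciprocator sustains cooperation, while a defector gains only a brief early edge:

```python
# Toy iterated Prisoner's Dilemma: cooperation pays off in repeated play.
# Payoff values are Axelrod's conventional ones; strategies are illustrative.

PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Return (score_a, score_b) after repeated play between two strategies."""
    hist_a, hist_b = [], []  # moves made by a and by b, respectively
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's past
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): sustained cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): one sucker payoff, then stalemate
```

The defector "wins" the head-to-head match by only five points, and in a population of mostly reciprocators it does far worse overall, which is the evolutionary pressure behind the cooperation-finding and cheater-detecting instincts described above.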
Next point: When I spoke of transhumanism's origin in "scientifically
literate aggressive rationalists", Michael Anissimov responded by saying:
> Look forward to Singularitarian ideas reaching far, far beyond the "core
> audience of scientifically literate aggressive rationalists" in the very near
> future. Instead of insisting that the meme be stapled down to its original
> tiny group, (however rational and intelligent they may be) perhaps we should
> be considering what variants would have the best net result in conditions of
> imminent mass propagation.
As for trying to propagate the meme outwards, I've been trying to do that
for six years! And I've made some progress, although not as much progress
as I would be making if memetics, rather than AI, were my full-time job.
But I've been doing it very very carefully, because again, historically,
this is a very very dangerous thing to do! Projects have been destroyed
by going public the wrong way - not rarely, but *frequently*.
It is a mistake to think that giving up all your ethics automatically
leads to success. Usually, giving up all the ethics turns out to mean
giving up all the morality as well, or delaying it until some indefinite
future date. The memes that are optimized for maximum propagation are NOT
THE SAME MEMES OPTIMIZED FOR ACTUAL ACHIEVEMENT OF THE SINGULARITY. The
memes that propagate best are pretty much *useless* for achieving the
Singularity.
Anissimov, you do not NEED to attach false promises or
mental-flaw-exploits to the Singularity concept in order to get it
accepted. The legitimate and entirely truthful offer of immortality,
freedom, and unlimited personal growth is ENOUGH. What we need is to make
sure that this wonderfully attractive package doesn't get hijacked. And
that means making very clear that the Singularity is not destiny and it is
not something that someone else does for you; it is something that comes
about only if you put active efforts into it. I know of no way to "cheat"
that does not destroy this principle as well. I have seen a few mutations
of the Singularity meme here and there, and I hope like hell that they all
die out, because there isn't a single one of them that could help actually
create the Singularity.
> It could be argued that even in creating distinct labels for those with
> different levels of future shock is perpetuating a group polarization
> mentality. Keep in mind that a successful memeticist (but not a cognitive
> scientist) will use flaws in the human psychology to ver advantage,
> oftentimes even projecting these flaws outwards from verself for the purpose
> of harmonizing with larger portions of society.
And you think that this doesn't involve any risk?!
What disturbs me almost as much as your proposal is that you're proposing
to *start out* with this as your guiding principle. If you start out with
an ethical compromise of that magnitude, where are you going to end up?
Why would anyone expect you to have any morals left at all,
when you're done?
You're just beginning your career as a writer. Even if you don't believe
in the absoluteness of ethics, right now you should accept those
ethics as absolute restraints anyway and learn to write within them,
for the same reason that programmers should not start out by sprinkling
"gotos" through their programs just because "structured programming is too
hard". When you're starting out in *any* skill is exactly the wrong time
to get sloppy, because you pick up bad habits that take a long time to
unlearn. When you begin, at anything, you should be a perfectionist.
I've been arguing ethically my whole career, and I don't *need* to resort
to unethical methods of argumentation, and I have no particular
expectation that I would lose to an "unethical" arguer in a fair fight.
Don't take the easy way out, and the decision will pay off.
As it is, believe me when I say that, ethical violation or no, you are
definitely messing up the memetic side of this. It is quite possible that
reporters will spend the next five years quoting you as definitive proof
that all Singularitarians are ends-justifies-means scary fanatics. Don't
think that reporters aren't that evil; they are. Don't think that they
don't know how to use a search engine; they do. Don't think that because
a webpage is, in your mind, only for the "inner circle", that's how it'll
be used; reporters will read it and they will quote it.
Everything is out in the open, which is indeed as it should be, but it
also means that you've just screwed up.
Now, I screwed up PR issues too when I started writing. I was still
screwing up even after I had three years of experience. For all I know
I'm still screwing up now. The fact that I never made excuses for those
errors and instead just shut up and corrected them means that I learned
quickly, but some of those ancient writings are still out there and still
being used against me. The Internet is like that. All errors are
permanent.
But at least I screwed up for reasons that the audience would have
respected if they'd ever had the chance to learn the whole story. If you
propose being "pragmatic" and abandoning all ethics in order to promote a
debased form of the Singularity meme, what other response could you
possibly expect from existing Singularitarians? What sympathy could you
expect from the audience even if given the best possible opportunity to
defend yourself? If that's the supposed PR advantage of "pragmatic
memetics" in action, I'll stick with "the right conclusion for the right
reasons", thank you very much. Even if, somehow, I were to come to agree
with you that "pragmatic memetics" actually represents an advantage to the
Singularity, right now I'd tell you to spend the next three years
imitating Gandhi, because whatever "pragmatic" ethical compromise you try
right now is solidly *guaranteed* to spin out of your control.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT