Ben's "Extropian Creed"

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Nov 13 2000 - 20:08:48 MST


This article is in reference to Ben Goertzel's article about Extropianism,
"The Extropian Creed", written for the FAZ (Frankfurter Allgemeine Zeitung),
which may be found online and in English at:

http://www.goertzel.org/benzine/extropians.htm

> This group [...] wants to push ahead with every kind of technology as
> fast as possible – the Internet, body modification, human-computer
> synthesis, nanotechnology, genetic modification, cryogenics, you
> name it. Along the way they want to get rid of governments, moral
> strictures, and eventually humanity itself, remaking the world as a
> hypereconomic virtual reality system in which money and
> technology control everything. Their utopian vision is sketchy but
> fabulous: a kind of Neuromancer-ish, Social-Darwinist
> Silicon-Valley-to-the-n’th-degree of the collective soul.

Actually, it's arguable whether we all like all of the technologies. For
example, I feel that genetic modification has too slow a turnaround time to
have any appreciable effect on our future, and I have misgivings about
nanotechnology; some other Extropians dislike AI. My main objection, however,
is to the part about Social Darwinism, which I feel is neither intrinsically
Extropian nor even intrinsically Libertarian. Social Darwinism certainly runs
diametrically counter to my own philosophy.

What Ben means specifically by "Social Darwinism" appears in the following
passages, which everyone else on SL4 should bear in mind are taken TOTALLY OUT
OF CONTEXT - the rest of the article is a fair-minded intro to Extropy, with
Social Darwinism being the one particular note that Ben Goertzel finds
disturbing.

> But there was a painful contradiction lurking here, not far beneath
> the surface. And this personal contradiction, I believe, cuts right at
> the heart of Extropian philosophy. The libertarian strain in Sasha’s
> thinking was highly pronounced: Once he told me, tongue only
> halfway in cheek, that he thought air should be metered out for a
> price, and that those who didn’t have the money to pay for their air
> should be left to suffocate! I later learned this was a variation on a
> standard libertarian argument, often repeated by Max More, to the
> effect that the reason the air was polluted was that nobody owned it
> – ergo, air, like everything else, should be private property.

[MUCH LATER:]

> In the back of my mind is a
> vision of a far-future hyper-technological Holocaust, in which
> cyborg despots dispense air at fifty dollars per cubic meter, citing
> turn-of-the-millennium Extropian writings to the effect that humans
> are going to go obsolete anyway, so it doesn’t make much
> difference whether we kill them off now or not. And so, I think
> Extropians should be read, because they’ve thought about some
> aspects of our future more thoroughly than just about anyone else.
> But I also think that the key idea that makes their group unique --
> the alliance of transhuman technology with simplistic,
> uncompassionate libertarian philosophy – must be opposed with
> great vigor.

> Many of the freedoms the Extropians seek – the legal freedom to
> make and take smart drugs, to modify the body and the genome
> with advanced technology – will probably come soon (though not
> soon enough for me, or them). But I hope that these freedoms will
> not come along with a cavalier disregard for those living in less
> fortunate economic conditions, who may not be able to afford the
> latest in phosphorescent terabit cranial jacks or
> quantum-computing-powered virtual reality doodaddles, or even an
> adequately nutritional diet for their children. I believe that we
> humans, for all our greed and weakness, have a compassionate
> core, and I hope and expect that this aspect of our humanity will
> carry over into the digital age – even into the transhuman age,
> outliving the human body in its present form. I love the human
> warmth and teeming mental diversity of important thinkers like Max
> More, Hans Moravec, Eliezer Yudkowsky and Sasha Chislenko,
> and great thinkers like Nietzsche – and I hope and expect that these
> qualities will outlast the more simplistic, ambiguity-fearing aspects of
> their philosophies. Well aware of the typically human
> contradictoriness that this entails, I’m looking forward to the
> development of a cyberphilosophy beyond Extropianism -- a
> humanist transhumanism.
>
> [ARTICLE ENDS]

==

The fundamental argument of Libertarianism is not against charity; it is
against government-compelled charity. There is nothing immoral about a
privately run charity; the Libertarian objection is to charity funded by
compulsorily collected taxation. Some
Libertarians say that charity at gunpoint is immoral. Others simply believe
that just about everything is less efficient when the government operates it -
that the money that is taxed away for forced charity is spent inefficiently,
debilitates the economy, and often does so much direct damage that the poor
would be better off if the government simply set fire to the money. There may
be Libertarians who don't give a damn about the poor, but not many, and
probably no higher a proportion than among the general populace.

I am a Libertarian because I have a heuristic - a heuristic derived from
examination of history - which says that involving the government makes
matters even worse. I object to government-compelled and
government-administered charity on the grounds that it just doesn't work. I
do not object to private charities. I do *not* believe in any form of Social
Darwinism. As far as I'm concerned, every human being, rich or poor,
constitutes one six-billionth of the total moral value of the Solar System.

I do not believe there is any action you can take, however evil or stupid,
which can cause you to "deserve" pain or death or poverty or even stupidity.
Nonconsensual pain is a moral wrong, a negatively valued goal, and nothing can
ever flip that sign to a positive. At this particular moment in time - as
opposed to after the Singularity - we have no better way to protect the
integrity of society than with forms of retaliation such as imprisonment or
shooting a mugger. Taking evil actions makes someone more "targetable" for
ills, so that if *someone* has to be hurt, we should prefer that it be the
mugger rather than his victim. That doesn't make the hurt a good thing.

I don't believe that being poor makes the ills that occur to you desirable.
I don't believe that being poor makes you "targetable" for
anything. I don't believe that it is morally better in any way if bad things
happen to poor people instead of rich people. I don't believe that society is
healthier when bad things happen to poor people. I don't believe that poor
people deserve poverty. I am not a Social Darwinist of any kind.

I also don't believe that the poor will be left out of the Singularity. I
think that the successful seed AI will blaze straight to strong transhumanity
and superintelligence and invent advanced nanotechnology, at which point the
cost of personal transcendence falls to zero. I think that who gets to
participate in the Singularity will be determined by the answer of each
individual human to the question: "Hi! Do you want to participate in the
Singularity?" You say "Yes", you get uploaded, you run on your six-billionth
of the mass of the Solar System; whoever you were before the Singularity, you
are now a Citizen of Terran Space, coequal with any other Citizen.

One of the major reasons I am *in* the Singularity biz is to wipe out
nonconsensual poverty.

Ben Goertzel, does that satisfy your call for a humanist transhumanism?

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


