Re: 'The Libertarian-Transhumanist Philosophical Platform'

From: Eugen Leitl (eugen@leitl.org)
Date: Tue Aug 10 2004 - 03:53:57 MDT


On Tue, Aug 10, 2004 at 04:36:23AM -0400, Eliezer Yudkowsky wrote:

> Also, Geddes, kindly do not call it "Yudkowsky's arrow of morality" for I
> never said such a thing.

Speaking of which, kindly stop putting words in my mouth as well. To wit:
http://www.intelligence.org/yudkowsky/friendly.html

"Eugen Leitl believes that altruism is impossible, period, for a
superintelligence (SI), whether that superintelligence is derived from humans
or AIs. The last time we argued this, which was quite some time ago, and thus
his views may be different now, he was arguing for the impossibility of
altruistic SI based on the belief that, one, "All minds necessarily seek to
survive as a subgoal, therefore this subgoal can stomp on a supergoal"; and
two, "In a Darwinian scenario, any mind that doesn't seek to survive will
die, therefore all minds will evolve an independent drive for survival." His
first argument is flawed on the grounds that it's easy to construct mind models
in which subgoals do not stomp supergoals; in fact, it's easy to construct
mind models in which "subgoals" are only temporary empirical regularities, or
even, given sufficient computing power, mind models in which no elements
called "subgoals" exist. His second argument is flawed on two grounds. First,
Darwinian survival properties do not necessarily have a one-to-one
correspondence with cognitive motives, and if they did, the universal drive
would be reproduction, not survival; and second, post-Singularity scenarios
don't contain any room for Darwinian scenarios, let alone Darwinian scenarios
that are capable of wiping out every trace of intelligent morality.

"Eugen essentially views evolutionary design as the strongest form of design,
much like John Smart, though possibly for different reasons, and thus he
discounts intelligence as a possible navigator in the distribution of future
minds. (I do wish to note that I may be misrepresenting Eugen here.) Eugen
and I have also discussed his ideas for a Singularity without AI. As I
recall, his ideas require uploading a substantial portion of the human race,
possibly even without their consent, and distributing these uploads throughout
the Solar System before any of them are allowed to begin a hard
takeoff, except for a small steering committee, which is supposed to abstain
from any intelligence enhancement, because he doesn't trust uploads either. I
believe the practical feasibility, and likely the desirability, of this
scenario is zero."

I never said the statements you attribute to me in direct quotes, and you *do*
misrepresent several things we've talked about.

So kindly pull it from your site. Thanks. (Why did I need to find this
through Google, of all things? Before you write things about people and
publish them, you ought to notify said people, to prevent reactions like this.)

-- 
Eugen* Leitl leitl
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net



