Re: The Future of Human Evolution

From: Randall Randall (randall@randallsquared.com)
Date: Tue Sep 28 2004 - 08:23:29 MDT


On Sep 28, 2004, at 8:41 AM, Eliezer Yudkowsky wrote:
> Randall Randall wrote:
>> Even if
>> molecular manufacturing were certain to kill 99% of the human
>> race, it would be better than an SAI with a 99% chance of being
>> benign. However, if FTL is possible, we may have to face that
>> choice anyway.
>
> Unfortunately, this isn't an either-or proposition. Building a
> nanofactory does not prevent anyone else from creating an AI. Quite
> the opposite.

Nor was I (or anyone else that I can see) suggesting otherwise.
However, which one comes first *is* a binary choice, as you
acknowledge below.

> My preference for solving the AI problem as quickly as possible has
> nothing to do with the relative danger of AI and nanotech. It's about
> the optimal ordering of AI and nanotech.

I agree that the debate is about the relative order of
working examples, but I think the relative dangers
involved are quite relevant. In particular, while newly
built nanofactories will certainly allow brute-forcing
of the AI problem at some point, it seems clear that
building vehicles capable of leaving the vicinity would
be effective almost immediately, since that engineering
problem is well understood, unlike the AI problem. In
any case, it seems like a simple two-by-two grid to me:
one axis is the limit on travel, FTL or STL (faster- or
slower-than-light), and the other is which technology
comes to fruition first, MNT (molecular nanotechnology)
or AI. In an FTL world, FAI is the only apparent hope
of surviving the advent of AI. In an STL world, however,
MNT can be sufficient for some to survive unFriendly AI.
Since we appear to live in an STL world, I prefer MNT
first.
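
Spelled out, the four cases as I see them:

                 MNT first                  AI first
  FTL possible   FAI is the only hope       FAI is the only hope
  STL only       escape viable (for some)   FAI is the only hope

Only the STL, MNT-first cell offers a route to survival that
does not depend on FAI, which is why the ordering matters.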

--
Randall Randall <randall@randallsquared.com>
"If you do not work on an important problem,
it's unlikely you'll do important work." -- Richard Hamming

