Re: Risk, Reward, and Human Enhancement

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Thu Dec 06 2007 - 12:07:42 MST


--- Byrne Hobart <sometimesfunnyalwaysright@gmail.com> wrote:

> > How do you determine whether the gain from making one person much
> > smarter outweighs the loss from making the rest of them marginally
> > dumber?
>
>
> My thinking was that FAI is likely to be the result of a collective
> effort, but that it's going to require at least one utterly brilliant
> thinker, and that the advantage of having *the* smartest person, rather than
> many people in the top 1%, would be high enough to justify a sacrifice.

The problem is that the less intelligent the masses are, the more likely they
are to ignore the genius.

> But
> it's part of a broader question: is the Singularity beneficial enough that
> we ought to accept a risk of massive harm to make it happen?

That's a different issue. The Singularity will certainly be beneficial to the
godlike intelligence that replaces humanity. Is human extinction "harm"?
Extinction is not massive death. We have massive death every day. Extinction
is a massive lack of birth.

-- Matt Mahoney, matmahoney@yahoo.com


