Re: Risk, Reward, and Human Enhancement

From: Rolf Nelson (rolf.h.d.nelson@gmail.com)
Date: Thu Dec 13 2007 - 20:06:08 MST


On Dec 5, 2007 11:54 AM, Byrne Hobart <sometimesfunnyalwaysright@gmail.com>
wrote:

> As we get better at directly manipulating human abilities, we're probably
> going to encounter situations in which a treatment has uncertain effects.
> Consider a new intelligence enhancement drug that, in clinical trials, has
> been shown to reduce IQ by 5 points 90% of the time, and raise it by 10
> points 10% of the time (and can be repeated indefinitely). For an
> individual, this is a pretty bad deal -- but get a group of 10,000 devoted
> singularitarians, have each one take the treatment, and then repeat it for
> the ones who get enhanced, and you'll end up with one person with an IQ 40
> points higher. And one ridiculously smart individual may make enough of a
> contribution to outweigh making 9,000 willing volunteers marginally dumber.
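
(Working the arithmetic on that proposal, as I read it: the expected change
per treatment is 0.9 * (-5) + 0.1 * (+10) = -3.5 IQ points, which is the
"risk-adjusted 3.5" figure below. And the selection chain runs
10,000 -> 1,000 -> 100 -> 10 -> 1, so four rounds of treatment leave one
expected winner at +40.)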

I'd like to think I would volunteer, if it were the most cost-effective way
to help out. (I can't say how probable it is that I would actually
volunteer, since (a) I haven't taken the time to think it over, the
scenario being hypothetical, and (b) the human brain has an uncanny knack
for rationalizing its way out of following through on sacrifices for
strangers.)

> I'm curious about any thoughts others might have about this. I've heard a
> lot of rhetoric about sacrificing spare money and spare time for FAI, and as
> far as I know there isn't a major difference between 1) sacrificing a lot of
> time advancing something that may not succeed within one's lifetime, if at
> all, and 2) sacrificing a risk-adjusted 3.5 IQ points in order to ensure
> that someone out there will be really, really intelligent.

On a psychological or motivational level (rather than a utilitarian level),
there's a huge difference (especially for non-transhumanists).

-Rolf


