Risk, Reward, and Human Enhancement

From: Byrne Hobart (sometimesfunnyalwaysright@gmail.com)
Date: Wed Dec 05 2007 - 09:54:55 MST


As we get better at directly manipulating human abilities, we're probably
going to encounter situations in which a treatment has uncertain effects.
Consider a new intelligence enhancement drug that, in clinical trials, has
been shown to reduce IQ by 5 points 90% of the time, and raise it by 10
points 10% of the time (and can be repeated indefinitely). For an
individual, this is a pretty bad deal -- but get a group of 10,000 devoted
singularitarians, have each one take the treatment, and then repeat it for
the ones who get enhanced, and in expectation you'll end up with one person
whose IQ is 40 points higher (10,000 -> 1,000 -> 100 -> 10 -> 1: four
successful doses at +10 each). And one ridiculously smart individual may
contribute enough to outweigh making 9,000 willing volunteers marginally
dumber.
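
Here's a rough simulation of that scheme (Python; this is just my own
sketch, assuming doses are independent and the 10%/90% odds stay fixed no
matter how high IQ gets):

    import random

    def run_scheme(cohort=10000, p=0.1, boost=10, seed=0):
        """Dose the whole cohort; re-dose only that round's winners.
        Returns (people left standing, their IQ gain over baseline)."""
        rng = random.Random(seed)
        alive, gain = cohort, 0
        while alive > 1:
            winners = sum(rng.random() < p for _ in range(alive))
            if winners == 0:
                return 0, 0  # chain died out before anyone reached the top
            alive, gain = winners, gain + boost
        return alive, gain

    # Expected trajectory: 10,000 -> ~1,000 -> ~100 -> ~10 -> ~1,
    # i.e. the last person standing is up about 4 doses x 10 = +40 points.
    print(run_scheme())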

I'm curious what others think about this. I've heard a lot of rhetoric
about sacrificing spare money and spare time for FAI, and as far as I can
tell there isn't a major difference between 1) sacrificing a lot of time
advancing something that may not succeed within one's lifetime, if at all,
and 2) sacrificing a risk-adjusted 3.5 IQ points per dose
(0.9 x -5 + 0.1 x +10 = -3.5) to ensure that someone out there will be
really, really intelligent.

Has this kind of concern been addressed before? Is the plan still valid
when the numbers work out so that there's a high probability that *nobody*
ends up enhanced even though people are harmed (e.g. if it were a 90%
chance of a 50-point drop and a 10% chance of a 100-point boost, and you
only had four volunteers, would you proceed?).

My best guess is that the Kelly Criterion will come into play, but IQ isn't
fungible the way money is, and the marginal utility of additional points
seems to increase rather than diminish, which breaks the log-utility
assumption Kelly rests on.
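
For reference, if you naively model one dose as a money-style bet that
risks 5 points to win 10 (net odds b = 2, win probability p = 0.1) -- my
framing, not an established one for IQ -- the standard Kelly fraction comes
out negative, meaning a log-utility bettor should decline entirely:

    def kelly_fraction(p, b):
        # Kelly stake for a bet won with probability p at net odds b
        # (you win b units per 1 unit risked, else lose the stake).
        return (b * p - (1 - p)) / b

    print(kelly_fraction(0.1, 10 / 5))  # -0.35: negative edge, bet nothing

Which suggests the group scheme only makes sense if one person's gain
really is worth superlinearly more than the sum of everyone else's losses.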


