Re: Why playing it safe is the most dangerous thing

From: Olie Lamb (neomorphy@gmail.com)
Date: Tue Mar 14 2006 - 16:48:21 MST


Main Topic: Negative utilitarianism

(Apparently, this didn't get to SL4, as it was sent from a non-subscribed
address. Silly me.)

>From: "Philip Goetz" <philgoetz@gmail.com>
>Subject: Re: Why playing it safe is the most dangerous thing
>Date: Fri, 24 Feb 2006 10:26:58 -0500
>...
>We could add the notion of negative utility. "Negative utility" is my
>explanation for why lotteries are so popular in poor communities,
>despite the fact that the expected ROI of a lottery ticket is < 1;
>
>Suppose, contemplating whether to buy a lottery ticket, a person sums
>up the expected utility of their entire future life without buying the
>lottery ticket, and concludes it is below the "zero utility level"
>below which they would be better off dead. They then consider the
>expected utility on buying the lottery ticket. This gives them two
>possible outcomes: one of very high probability, and a slightly lower
>negative utility; one of small probability, with positive utility.

Yes, this is a reasonably common extension of utilitarianism. In my
experience, a fair portion of undergraduate normative ethics classes will
run through a couple of negative-utilitarian models.

One problem with applying your variety of negative-utility function is that
it relies on the starting assumption that future life = shit. This outlook
is also known as "extreme pessimism".

Your call for such a pessimistic outlook puts the onus of proof heavily upon
you. I don't think that your generalisations about the nature of "jerks in
power" constitute much of an argument for believing that caution leads to a
probability of negative utility of "near 1". You can't get much more
extreme pessimism than that.

I don't think that it's possible to be confident about the positive or
negative outcomes of actions that support the status quo. The only way to
have much faith in the course of the future is to have a hand in shaping it.

== Negative utility systems don't work ==

The bigger problem is that some of the other implications of
negative-utility operations lead to a number of absurdities.

Firstly, your version: by saying that any negative-utility state can revert
to null thanks to suicide, it implies that any risky activity can be
justified. ("Don't dance on the rail! You could fall and become
quadriplegic!" "That's OK, I could just kill myself if that happened!")

In fact, under this model the only risky activities that need to be avoided
are those with minor bad consequences: catastrophic outcomes get replaced by
the zero "suicide utility", while mildly bad outcomes (not bad enough to make
suicide preferable) still count against the action. Check the math; it works
out that way.
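Here's a minimal sketch of that math (Python, with numbers invented purely to
illustrate the shape of the problem):

# "Suicide resets negative utility to zero" rule vs. ordinary expected utility.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs
    return sum(p * u for p, u in outcomes)

def suicide_adjusted_utility(outcomes):
    # Any outcome below the "better off dead" line gets replaced by
    # the zero suicide utility before averaging.
    return sum(p * max(u, 0.0) for p, u in outcomes)

stay_off_rail    = [(1.0, 10.0)]                   # dull but safe
dance_big_risk   = [(0.9, 12.0), (0.1, -500.0)]    # small chance of quadriplegia
dance_small_risk = [(0.5, 12.0), (0.5, 4.0)]       # small chance of a sprained ankle

for name, outcomes in [("stay off rail", stay_off_rail),
                       ("dance, big risk", dance_big_risk),
                       ("dance, small risk", dance_small_risk)]:
    print(name, expected_utility(outcomes), suicide_adjusted_utility(outcomes))

# Ordinary expected utility says avoid the big risk (-39.2 vs 10).
# The suicide-adjusted rule endorses the big risk (10.8 vs 10) but
# forbids the small one (8 vs 10): only minor bad consequences matter.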

(more)

>Rather than combining these two, the person reasons that they can kill
>themselves any time they choose, and thus replaces each of the
>negative-utility outcomes with a zero "suicide utility". The
>low-probability positive outcome, averaged together with the
>high-probability suicide utility of zero, produces an average utility,
>which is higher than the suicide utility (zero) of their life without
>the lottery ticket.
>
>(Note that finding oneself with a losing lottery ticket doesn't then
>require one to commit suicide. One merely begins looking for other
>low-probability branches - future lottery tickets - leading towards
>positive utility.)
>
>More specifically, this negative utility theory says that, when
>comparing possible actions, you compare the expected utilities only of
>the portions of the probability distributions with positive utility.
>If you consider the probability distribution on future expected summed
>life utilities, and let
>
> - U0 be the positive area for the no-ticket distribution (the
>integral of utility over all outcomes under which utility is positive)
> - UT be the positive area for the bought-a-ticket distribution
>
>then UT > U0 => you should buy a ticket.
>
>We can apply similar logic to possible outcomes of the Singularity.
>If, as I've argued, the careful approach provides us with a near-1
>probability of negative utility, and the damn-the-torpedoes approach
>provides us with a greater-than-epsilon probability of positive
>utility, then we seem to be in a situation where the summed positive
>utility of damn-the-torpedoes is greater than the summed positive
>utility of the cautious approach, EVEN if the expected utility of the
>cautious approach is greater.
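
To make the quoted rule concrete, here's a small sketch of the "positive
area" comparison (Python again; the distributions are invented to show the
shape of the argument, not the actual figures from the quoted post):

def positive_area(outcomes):
    # Sum of p * u over only the positive-utility outcomes
    # (a discrete stand-in for the integral above).
    return sum(p * u for p, u in outcomes if u > 0)

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

cautious = [(0.99, -100.0), (0.01, 50.0)]     # near-1 chance of negative utility
reckless = [(0.90, -1000.0), (0.10, 1000.0)]  # damn the torpedoes

print(positive_area(cautious), positive_area(reckless))        # 0.5 vs 100.0
print(expected_utility(cautious), expected_utility(reckless))  # -98.5 vs -800.0

# The positive-area rule prefers the reckless path by a wide margin, even
# though its ordinary expected utility is far worse -- which is exactly the
# move I take issue with below.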

Apart from the fact that this negative-utilitarian model doesn't work, you
can't reasonably apply a suicide function in the case of UFAI.

In a Nasty-Powerful-Intelligence scenario, there is no guarantee that
suicide would be a possible escape for sentients at the butt end of the
Nasty-P-I's malice. Why would a nasty-intelligence let its subjects escape
through suicide, any more than a dancing bear's owner would let the bear
kill itself?

== Other negative-utility models don't work, either ==

One typical model of negative utility that comes up is that only suffering
counts, or that it counts more than pleasure. This model of utilitarianism
is appealing in that it avoids the slavery pitfall from which the standard
aggregate-utility model suffers.

That is: the standard model says that it's ok to make one slave suffer 10
points if the slave brings 20 points of happiness to their master(s).

The suffering-utility model avoids this. However, if you count auto-utility,
it makes it immoral to clean one's own bathroom (delayed gratification),
which is absurd. It also makes it immoral to work to buy your partner a gift
(suffering for a greater gain), which is also absurd, although the
neg-utility advocate can say that chosen suffering doesn't count. There's
still a problem in getting someone to pass you the salt, since their (minor)
suffering counts while your reward doesn't, or is discounted.
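
Roughly, with made-up numbers (a sketch assuming suffering counts in full and
pleasure counts for nothing, or very little):

def aggregate_utility(effects):
    return sum(effects)

def suffering_only_utility(effects, pleasure_weight=0.0):
    # Harms count in full; benefits are ignored (or discounted).
    return sum(e if e < 0 else pleasure_weight * e for e in effects)

slavery  = [-10, +20]   # slave suffers 10, masters gain 20
bathroom = [-3, +5]     # unpleasant hour of cleaning, nicer bathroom afterwards

print(aggregate_utility(slavery), suffering_only_utility(slavery))    # +10 vs -10
print(aggregate_utility(bathroom), suffering_only_utility(bathroom))  # +2 vs -3

# The aggregate model endorses the slavery case (+10), which the
# suffering-only model rightly rejects (-10) -- but the same rule also
# condemns cleaning your own bathroom (-3), which is the absurdity above.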

Another (contested) implication of neg-utility models is the pinprick
argument: http://www.utilitarianism.com/pinprick-argument.html

A much more sensible approach, which gets the benefits of neg-utility, but
doesn't suffer its pitfalls, is maxi-min utilitarianism, where the goal is
to make the least-well-off person in a group as happy as possible. This
model also has flaws, but they're not /quite/ as absurd.
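
For comparison, the maxi-min rule in the same toy setting (invented numbers
again):

def maximin_value(per_person_utilities):
    # The group's score is the utility of its worst-off member.
    return min(per_person_utilities)

with_slavery    = [-10, 20, 20]   # slave plus two masters
without_slavery = [5, 8, 8]

best = max([with_slavery, without_slavery], key=maximin_value)
print(best)  # [5, 8, 8]: slavery loses, without having to zero out pleasure

# Maxi-min has its own problems (it ignores everyone but the worst-off
# person), but its failures aren't /quite/ as absurd.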


... And another response to a different topic w/in that post:

>On 2/24/06, Ben Goertzel <ben@goertzel.org> wrote:
>
> > Peter, two points:
> >
> > 1)
> > Eliezer has sometimes proposed that a Singularity not properly planned
> > with regard to Friendly AI is almost certain to lead to human
> > extinction. But this has not been convincingly argued for. He has
> > merely shown why this is a significant possibility.
>
>Human extinction might be a likely outcome. I was speaking of
>extinction of life, which I regard as a definitely bad thing, and an
>unlikely outcome.

Hmm... Pan-Computronium seems a fairly likely outcome to me for any scenario
involving an "unwise" seed-AI. Pan-Computronium would seem to imply the
appropriation of the biosphere's carbon supply.

Sounds to me as though the rest of life is just as likely to get the boot as
humans are...

Not that I'm sure that UFAI


