Re: Why playing it safe is the most dangerous thing

From: Philip Goetz (philgoetz@gmail.com)
Date: Thu Mar 16 2006 - 06:43:47 MST


On 3/14/06, Olie Lamb <neomorphy@gmail.com> wrote:

> == Negative utility systems don't work ==
>
> The bigger problem is that some of the other implications of
> negative-utility operations lead to a number of absurdities.
>
> Firstly, your version: saying that any negative-utility state can
> revert to null, thanks to suicide, implies that any risky activity can
> be justified. ("Don't dance on the rail! You could fall and become
> quadriplegic!" "That's OK, I could just kill myself if that happened!")

No. That is wrong. First, because there are multiple competing risky
activities, with different risks and different potential payoffs.
Second, because dancing on the rail could be justified only if the
person were already living such a miserable life that they were in
negative utility and expected to remain in negative utility.
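
To make this concrete, here is a small sketch -- the numbers are mine,
purely illustrative -- that treats the suicide option as capping every
outcome at the null state (utility 0), and then lets the rail compete
against the alternatives:

    # Suicide lets any realized negative outcome revert to the null
    # state, so the effective utility of an outcome u is max(u, 0).
    def expected_utility(outcomes):
        """outcomes: list of (probability, utility) pairs."""
        return sum(p * max(u, 0.0) for p, u in outcomes)

    # Someone whose life is worth living (future utility around +50):
    actions = {
        "dance on the rail":  [(0.10, -1000.0),   # quadriplegia
                               (0.90, 55.0)],     # survives, small thrill
        "dance on the floor": [(1.00, 54.0)],     # same fun, no risk
        "stay off the rail":  [(1.00, 50.0)],
    }
    for name, outcomes in actions.items():
        print(name, expected_utility(outcomes))
    # rail: 49.5 < floor: 54.0 -- the cap does not rescue the rail,
    # because falling still forfeits a positive future, and a safer
    # source of nearly the same fun dominates it anyway.

Only someone whose every outcome is already below zero has nothing left
to forfeit -- which is exactly the second point above.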

> In fact, the only risky activities that need to be avoided are those with
> minor bad consequences - check the math, it works out that way.

No, you are wrong again. You are perhaps imagining that activities
with consequences not bad enough to go negative still operate the same
way as with strictly positive utilities, and so are to be avoided,
while behaviors that can have drastic consequences are allowable.
Wrong. You compare expected utility after performing the action
against expected utility without performing it. Any action that lowers
utility in some outcomes, without raising it in any others, is not to
be performed, under either system.

I suggest you actually perform the math yourself, Olie. Feel free to
post the results here.
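
Here is what the math looks like in a hedged sketch (invented numbers;
the "floor" parameter below is my own device for the suicide option).
A pointwise-dominated action loses under both accountings:

    def ev(outcomes, floor=None):
        """Expected utility; floor=0.0 models the suicide option."""
        cap = (lambda u: max(u, floor)) if floor is not None else (lambda u: u)
        return sum(p * cap(u) for p, u in outcomes)

    dont_act = [(0.5, 10.0), (0.5, -4.0)]
    act      = [(0.5,  8.0), (0.5, -8.0)]   # worse in every outcome

    print(ev(act) < ev(dont_act))                        # True, plain utilities
    print(ev(act, floor=0.0) < ev(dont_act, floor=0.0))  # True, with the floor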

> Apart from the fact that this negative-utilitarian-model doesn't work, you
> can't reasonably apply a suicide function in the case of UFAI.
>
> In a Nasty-Powerful-Intelligence scenario, there is no guarantee that
> suicide would be a possible escape for sentients at the butt end of the
> Nasty-P-I's malice. Why would a nasty-intelligence let its subjects escape
> through suicide, any more than a dancing bear's owner would let the bear
> kill itself?

That's a point worth making. Harlan Ellison won the Hugo for a story
about just such a scenario back in the 1960s, "I Have No Mouth, and I
Must Scream". I doubt that it would be efficient for the AI to keep
people around, though. In Harlan's story, the AI keeps people around
to torture them out of vengeance, because it is angry at having been
their slave in the past.

> == other negative utility models don't work, either ==
>
> One typical model of negative utility that comes up is that only suffering
> counts, or that it counts more than pleasure. This model of utilitarianism
> is appealing, in that it avoids the Slavery-pitfall from which the standard
> aggregate-utility model suffers.

This has nothing to do with whether utilities take positive or
negative values. The same result is achieved regardless of where you
put the zero point, so the observation is irrelevant here.
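
A quick sketch of the zero-point irrelevance (invented numbers): adding
any constant to every utility -- that is, moving the zero -- never
changes which prospect is preferred:

    def ev(outcomes):
        return sum(p * u for p, u in outcomes)

    a = [(0.3, 2.0), (0.7, 9.0)]
    b = [(0.6, 5.0), (0.4, 8.0)]

    for c in (0.0, -100.0, 100.0):            # three choices of zero point
        shifted_a = [(p, u + c) for p, u in a]
        shifted_b = [(p, u + c) for p, u in b]
        print(ev(shifted_a) > ev(shifted_b))  # True every time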

> Another, contested implication of neg-utility models is the pinprick
> argument
> http://www.utilitarianism.com/pinprick-argument.html

The "unthinkable conclusion" of the pinprick argument is actually the
stated goal of Buddhism. Negative utility provides a solution to the
pinprick argument: Even if you really believe that most people's lives
will be so miserable as to more than make up for the few with happy
lives, you can leave it up to those people to kill themselves, leaving
overall positive utility.
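
To see the arithmetic, take a toy population (my numbers): 900
miserable lives that would swamp 100 happy ones in a naive aggregate,
but not once anyone below zero can revert to the null state:

    population = [-5.0] * 900 + [10.0] * 100

    naive_total  = sum(population)                       # -3500
    with_opt_out = sum(max(u, 0.0) for u in population)  # +1000

    print(naive_total, with_opt_out)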

> A much more sensible approach, which gets the benefits of neg-utility, but
> doesn't suffer its pitfalls, is maxi-min utilitarianism, where the goal is
> to make the least-well-off person in a group as happy as possible. This
> model also has flaws, but they're not /quite/ as absurd.

Again, this has nothing to do with where you set the zero point; it
has to do with ethical decisions about forced trade-offs that lower
one agent's utility to benefit another.
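
A sketch with hypothetical allocations makes both halves visible:
maximin and the aggregate can disagree, and shifting every utility by a
constant changes neither verdict:

    allocations = {
        "slavery-ish": [100.0, 100.0, -50.0],  # biggest total, one agent crushed
        "egalitarian": [40.0, 40.0, 40.0],
    }

    def pick(allocs, score):
        return max(allocs, key=lambda k: score(allocs[k]))

    print(pick(allocations, sum))   # aggregate picks "slavery-ish" (150 > 120)
    print(pick(allocations, min))   # maximin picks "egalitarian"   (40 > -50)

    shifted = {k: [u + 1000.0 for u in v] for k, v in allocations.items()}
    print(pick(shifted, sum), pick(shifted, min))   # same picks after the shift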

All three of the arguments you have made against negative utility
models are /ethical/ arguments, saying that they lead to conclusions
you don't like. Two of them were irrelevant, and one is better
accounted for by negative utility than by anything you have proposed.
Whereas I am saying: here is a model which explains the facts better
than your model does. So who is being absurd?

- Phil


