Re: My doubts about Libertarianism and volitional morality

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Sep 11 2003 - 14:55:24 MDT


Marc Geddes wrote:

> I've recently realized what I think the major problem
> with Libertarianism and volitional morality is.
> [snip]
> In fact, I fear that the problem of incomplete
> information throws the whole theory of volitional
> morality into doubt. People cannot be said to be making
> free choices when full relevant information is not
> available.
> [snip]
> I think the problem of incomplete information is a far
> greater problem for Libertarianism and volitional
> morality than I had first thought.

You're confusing Libertarianism and volitionism. They are quite different
things.

Volitionism is a philosophy whose prime purpose is to explicitly raise and
address challenges like incomplete information, the fact that people's
moral codes change over time, the inability of people to predict their own
preferences, preference reversals, nontransitive preference orderings,
duration insensitivity, and so on. "How, given a human, do you say what
he or she 'wants'?" is the FAI-complete question that volitionism is meant
to address.

In the case of incomplete information, the problem is relatively
straightforward. The von Neumann-Morgenstern expected utility equation is:

D(a) = Sum over all x: U(x)P(x|a).

If your decisionmaking obeys a few plausible-sounding axioms (which,
naturally, real people violate all over the place), the desirability of an
action a will equal the sum, over all outcomes x of interest, of "the
utility of x, times the probability of x given that the decision system
chooses a".
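
A minimal sketch of that computation in Python, with the outcomes,
utilities, and probabilities invented purely for illustration:

    # Hypothetical sketch of D(a) = sum over all x of U(x) * P(x|a).
    # All names and numbers here are made up for illustration.

    def desirability(utilities, probs_given_a):
        """Sum of U(x) * P(x|a) over every outcome x of interest."""
        return sum(utilities[x] * probs_given_a[x] for x in utilities)

    U = {"money": 1.0, "empty": 0.0}            # U(x)
    P_given_a = {"money": 0.05, "empty": 0.95}  # the agent's subjective P(x|a)
    print(desirability(U, P_given_a))           # 0.05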

Suppose that x has an objective frequency given a, F(x|a). If the
person's subjective probability P(x|a) doesn't match the objective
frequency F(x|a), then to a first approximation we can say that the
subject's "volition" as a moral desideratum should be computed using
U(x)F(x|a), while the subject's actual decisions will in fact be computed
using U(x)P(x|a). In other words, your "volition" is an abstract entity
which your actual decisions only approximate; your volition is the
decision you would make if you had perfect information.
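
Continuing the sketch above (again, the utilities and probabilities are
invented numbers, not anything specified here), the same action can be
scored both ways, and the second score is the quantity the first is
trying to approximate:

    # Hypothetical sketch: one action scored under the subjective P(x|a)
    # and under the objective frequency F(x|a). Numbers are invented.

    def desirability(utilities, probs):
        return sum(utilities[x] * probs[x] for x in utilities)

    U = {"money": 1.0, "empty": 0.0}
    P = {"money": 0.05, "empty": 0.95}  # what the person actually believes
    F = {"money": 1.00, "empty": 0.00}  # the objective frequency

    print(desirability(U, P))  # 0.05 -- drives the actual decision
    print(desirability(U, F))  # 1.0  -- what the "volition" approximates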

Really, people are a lot more complicated than this, so for Belldandy's
sake don't go plugging the above into a Friendly AI. People have complex
structure in the utility computation U(x), which is something that most
accounts of expected utility don't take into account at all. All kinds of
assumptions, such as the time invariance, separability, or perfect
knowledge of U(x), have been invisibly glossed over by one philosopher or
another. But for the particular question of incomplete
information affecting the binding of ends to means, the construction of
volition is *relatively* straightforward; the volition is the decision
that people would make if they had the missing information.

It rapidly gets more complex, but for the *particular case* of the
means-end mismatch you cite:

> Suppose I pointed to shelf of boxes and said that in
> one box was millions of dollars, and the other boxes
> were empty. I then allowed someone to 'choose' which
> box they wanted, telling him or her that if they pick
> the box containing the money they can keep it. They
> pick a box - it's empty. All fair? Suppose that it
> turned out that the box with the money had been
> deliberately placed in shadow, so that the person
> doing the choosing didn't see it. So I didn't lie
> exactly, but clearly information was concealed from
> the person doing the choosing.

Here the missing information is clear; the "correct" choice that the
person's decision system is trying and failing to approximate is clear;
the math involved is clear (the person's P($1M|box13) is 0.05, while
F($1M|box13) is 1, and the person's decision process converges to box13 as
P converges to F); and nobody is likely to object if you point out the
correct box to them, or even if you preemptively choose the "correct" box
for them on the basis of your more informed world-model and your guess as
to their utility function; though they might start to worry if you made a
habit of it.
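
As a toy illustration of that convergence (the box count, utilities, and
probability sweep are assumptions of mine, not anything specified above):

    # Hypothetical sketch: 20 boxes, box13 actually holds the money
    # (F($1M|box13) = 1). Sweep the person's subjective P($1M|box13)
    # upward and watch the expected-utility choice flip to box13.

    U_MONEY = 1.0
    boxes = [f"box{i}" for i in range(1, 21)]

    def chosen_box(p_box13):
        # Subjective P($1M|box): 0.05 for every other box, p_box13 for box13.
        def p_money(box):
            return p_box13 if box == "box13" else 0.05
        return max(boxes, key=lambda b: U_MONEY * p_money(b))

    for p in (0.05, 0.06, 0.5, 1.0):
        print(p, chosen_box(p))
    # As soon as P($1M|box13) pulls ahead of 0.05, the decision is box13,
    # matching the choice computed from F.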

Regardless of which box the person chooses, the person wants the box with
the money, and in a volitional sense can be said to "want" box 13 even
though no specific representation of box 13 appears in the person's mind.
The person is not likely to object to your construal of that abstract
fact, or to your assistance in obtaining the goal, at least if you're an
ordinary human-level friend. And this answer is nicely consonant with
both our intuitions about what it means to help someone, and our intuitions
about what it means to be helped.

So the *particular* problem you cite is straightforward in volitionism.
Obviously there are more complicated ones.

> Just how far do you carry volitional morality?

I don't know that I would say you can draw the line wherever you like, but
since you are yourself and I am not, you *will in fact* draw the line
wherever you choose, whether I like it or not. You may decide that
homosexuality is immoral, for example. But why would I, as a third party,
pay attention to you, as a fourth party, saying that you've decided it's
wrong, if the first and second parties, Susan and Debby, decide to get
married, and they both appear happy with their choices? How can
desiderata that violate volitional morality take wings and become
transpersonal?

> Suppose a group of people decided that they wanted to
> start making snuff movies in the middle of the road.
> Every person in the group agrees to it of their own
> free will (the subjects are happy to commit suicide).
> Volitional morality is obeyed. Should we let them
> start making the snuff movies in the street? To me
> the answer is an obvious no, and in this extreme
> example volitional morality looks absurd.

Why is the answer obviously no? Who are you to determine whether someone
else lives or dies? I don't think the answer is obviously yes, but
"obviously no"?

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

