**From:** Wei Dai (*weidai@weidai.com*)

**Date:** Tue Sep 16 2003 - 17:28:30 MDT

**In reply to:** Eliezer S. Yudkowsky: "Constructing volition (was: my doubts)"

On Tue, Sep 16, 2003 at 06:09:06PM -0400, Eliezer S. Yudkowsky wrote:

> I haven't looked into the result you refer to, Wei Dai, but my initial
> impression is that it assumes infinite degrees of freedom in both U(x) and
> P(x). Leaving aside the former, the latter, at least, is usually assumed
> to normatively obey Bayesian probabilities, and cognitively it obeys
> certain loose non-Bayesian rules. We are surprised when we see that under
> certain circumstances people assign subjective likelihoods P(A&B) > P(A),
> but having discovered this, we can then predict that most people will do
> it most of the time under those conditions. So there are not infinite
> degrees of freedom in P(x), either normatively or cognitively, and if you
> use this constraint to construct U(x) you will not find infinite relevant
> degrees of freedom in U(x) either. Of course I am only reasoning
> intuitively here, and I may have gotten the math wrong.

From what I remember, you can always find an infinite set of pairs of U' and P' that satisfy the constraints of decision theory, with P' != P. I think in most cases you will get different volitional orderings from these U'. I can look up the math details if you're interested.
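Roughly, the construction works like this (a toy sketch rather than the exact result; the states, acts, and numbers below are invented for illustration): rescale P into any different P' and compensate with a state-dependent U', and every act keeps the same expected utility, hence the same choices.

```python
# Toy illustration (all values invented): a different probability function P',
# paired with a compensating state-dependent utility U', reproduces exactly
# the same expected utilities as (P, U), and therefore the same choices.

states = ["s1", "s2"]
payoff = {"a": {"s1": 10.0, "s2": 0.0},   # payoff of each act in each state
          "b": {"s1": 4.0,  "s2": 8.0}}

P  = {"s1": 0.5,  "s2": 0.5}              # the subject's probability function
Pp = {"s1": 0.25, "s2": 0.75}             # a different one, P' != P

def U(state, x):                          # original utility: just the payoff
    return x

def Up(state, x):                         # compensated utility: U'(s, x) = U(s, x) * P(s) / P'(s)
    return x * P[state] / Pp[state]

def expected_utility(act, prob, util):
    return sum(prob[s] * util(s, payoff[act][s]) for s in states)

for act in payoff:
    print(act,
          expected_utility(act, P,  U),   # a: 5.0, b: 6.0 ...
          expected_utility(act, Pp, Up))  # ... and identical under (P', U')
```

Since U' varies with the state while U does not, reading a "volition" off (P', U') rather than (P, U) would give a different answer, even though both pairs explain the same behavior.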

> But, mostly my reply is that I'm not using the economist assumption that
> we have to construct U(x) and P(x) by looking exclusively at people's
> choices. From a volitionist standpoint, what I would like to do is open
> up people's heads and look at their mind-state, figure out what systems
> are working, what they contain, what actual system is producing the
> choices, and then, having this functional decomposition, ask which parts
> have U(x) nature and which parts have P(x) nature, bearing in mind that
> they will overlap. Existing cognitive psychology actually goes quite a
> ways toward doing this.

So how do you determine whether the subject volitionally wants you to open up his head and look inside?

But putting that aside, the main point I want to make is that I don't see what difference exists between the utility function and the probability function that makes you want to respect a person's utility function but not his probability function. Both functions need to satisfy certain constraints according to decision theory, but other than that they are completely subjective and arbitrary. Why not have the Friendly AI implement an alternative moral theory where the AI makes decisions for a person according to his own probability function but substitutes the AI's idea of a "correct" utility function for the subject's? That makes just as much sense to me as volitionism.

From the subject's perspective, both of these approaches are equally bad. In both cases the AI is doing things that hurt his expected utility. Why should the fact that the AI respects his utility function be more of a consolation than the fact that the AI respects his probability function?
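As a toy illustration of that symmetry (the states, outcomes, and numbers below are invented): whichever function the AI substitutes, the act it picks can come out worse when scored by the subject's own probability and utility functions together.

```python
# Toy illustration (all values invented) of the symmetry: an AI that keeps the
# subject's probabilities but substitutes its own "corrected" utilities, and an
# AI that keeps the subject's utilities but substitutes its own "corrected"
# probabilities, can both pick acts that the subject's own expected utility ranks lower.

states = ["rain", "sun"]
outcomes = {"a": {"rain": "wet",  "sun": "picnic"},
            "b": {"rain": "home", "sun": "home"}}

subj_P = {"rain": 0.7, "sun": 0.3}
subj_U = {"wet": -5.0, "picnic": 10.0, "home": 2.0}

ai_P = {"rain": 0.1, "sun": 0.9}                    # AI's "corrected" beliefs
ai_U = {"wet": 1.0, "picnic": 5.0, "home": 0.0}     # AI's "corrected" values

def eu(act, P, U):
    return sum(P[s] * U[outcomes[act][s]] for s in states)

def best(P, U):
    return max(outcomes, key=lambda a: eu(a, P, U))

choices = [("subject's own choice", best(subj_P, subj_U)),
           ("AI substitutes its U", best(subj_P, ai_U)),
           ("AI substitutes its P", best(ai_P, subj_U))]

for label, act in choices:
    # Score every choice by the subject's own P and U: the two substituted
    # policies both pick "a", which the subject rates at -0.5 versus 2.0 for "b".
    print(label, act, eu(act, subj_P, subj_U))
```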

