Re: FAI: Collective Volition

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Thu Jun 03 2004 - 12:00:58 MDT


Wei Dai wrote:
> On Wed, Jun 02, 2004 at 12:09:58PM -0400, Eliezer Yudkowsky wrote:
>
>>The point of the analogy is to postulate al-Qaeda programmers smart enough
>>to actually build an AI. Perhaps a better phrase in (5) would be, "avoid
>>policies which would create conflicts of interest if multiple parties
>>followed them". Categorical Imperative sort of thing. I am *not* going to
>>"program" my AI with the instruction that Allah does not exist, just as I
>>do not want the al-Qaeda programmers programming their AI with the
>>instruction that Allah does exist. Let the Bayesian Thingy find the map
>>that reflects the territory. So the al-Qaeda programmers would advise me,
>>for they know I will not listen if they mention Allah in their advice.
>
> But where does the Bayesian prior come from? Al Qaeda has its prior, and
> you have yours. What to do except fight?

I think this abuses the term "Bayesian prior", which with regard to AI
design refers not to current beliefs but to the ur-prior. I would expect
the ur-prior to be the Principle of Indifference over identifiable
interchangeable spaces, and, for more complex problems, Solomonoff
induction over simple programs and simple conceptual structures. Even
this ur-prior can be refined by checking its parameters against the test
of complex reasoning, tweaking the hypothesis space to more closely match
observed reality. I think. I haven't checked the math.
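
To make the flavor of that concrete, here is a minimal toy sketch of my
own (assuming nothing beyond a crude 2^-length simplicity weight; it is
an illustration, not anything out of an actual AI design): hypotheses of
equal description length get equal weight, per the Principle of
Indifference, and shorter hypotheses get more weight, as a stand-in for
Solomonoff induction.

def ur_prior(hypotheses):
    """Toy ur-prior: unnormalized weight 2^-len(h) per hypothesis string."""
    weights = {h: 2.0 ** -len(h) for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

print(ur_prior(["ab", "cd"]))   # equal length -> equal weight: 0.5 each
print(ur_prior(["a", "bcd"]))   # shorter hypothesis dominates: 0.8 vs 0.2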

"Allah exists", "Allah does not exist" is not an appropriate thing to have
in an ur-prior at all, and anyone programming in arbitrary propositions
into the ur-prior with a probability of 10^11 or something equally
ridiculous is playing nitwit games. (And my current understanding of FAI
design indicates this nitwit game would prove inconsistent under
reflection.) Any reasonable assignment of ur-priors would let the evidence
wash away any disagreement. If you can possibly end up fighting over
ur-priors, you're not just a jerk, you're a non-Bayesian jerk. Ur-priors
are not arbitrary; they calibrate against reality, like any other map and
territory.
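
To see the "wash away" part concretely, a minimal sketch, assuming
nothing beyond vanilla Bayes' rule; the likelihoods (0.8 vs. 0.2, so each
observation favors H by 4:1) and the starting priors of 0.999 and 0.001
are made-up numbers for illustration, not anyone's actual figures.

def update(prior, lik_h, lik_not_h, n_observations):
    """Apply Bayes' rule to P(H) once per shared observation."""
    p = prior
    for _ in range(n_observations):
        p = p * lik_h / (p * lik_h + (1 - p) * lik_not_h)
    return p

for prior in (0.999, 0.001):    # wildly disagreeing starting points
    print(prior, "->", round(update(prior, 0.8, 0.2, 20), 6))
# Both land at ~1.0 after twenty shared observations; the only way to
# stay locked in disagreement forever is to start from a dogmatic 0 or 1.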

>>Reading... read. Relevant stuff, thanks.
>
> Did reading it cause you to change some of your designs? If so, how?

No, but it gives me a place to track down more complicated expected-utility
math, which I need to do at some point.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

