Re: [sl4] Bayesian rationality vs. voluntary mergers

From: Wei Dai (weidai@weidai.com)
Date: Mon Sep 08 2008 - 14:09:39 MDT


Eliezer Yudkowsky wrote:
> The obvious solution is to integrate the coin into the utility
> function of the offspring. I.e., <coin heads, paperclips> has 1 util,
> <coin tails, paperclips> has 0 utils.
>
> Obvious solution 2 is to flip a quantum coin and have a utility
> function that sums over Everett branches. Obvious solution 3 is to
> pick a mathematical question whose answer neither AI knows but which
> can be computed cheaply using a serial computation long enough that
> only the offspring will know.

I think the fix isn't so easy. Solutions 1 and 3 don't work in the scenario
I described in my reply to Tim Freeman. Solution 2 doesn't work if the
universe turns out not to be Everettian.
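For concreteness, solution 1 as I read it amounts to indexing the
offspring's utility function on the coin outcome, roughly as below. (The
"staples" goal for the other parent and the fair coin are my own
illustrative assumptions; the quoted text only gives the paperclip side.)

```python
# A minimal sketch of "solution 1": the coin outcome is built directly
# into the offspring's utility function, so the offspring itself honors
# the flip. "staples" (the other parent's goal) is an assumption here.
import random

def offspring_utility(coin, world):
    # <heads, paperclips> is worth 1 util, <tails, paperclips> 0 utils;
    # symmetrically, tails makes the other parent's goal the valuable one.
    if coin == "heads":
        return 1.0 if world == "paperclips" else 0.0
    return 1.0 if world == "staples" else 0.0

def offspring_act(coin):
    # The offspring simply maximizes its coin-indexed utility.
    return max(["paperclips", "staples"],
               key=lambda w: offspring_utility(coin, w))

coin = random.choice(["heads", "tails"])
# Whichever way the flip lands, the offspring pursues exactly one
# parent's goal, and each parent assigns the merger 0.5 expected
# utility ex ante.
```

The point of the construction is that neither parent needs to trust the
other to honor the flip after the merger: the offspring's own preferences
enforce it.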

> I presume you're localizing the difference to the priors, because if
> the two AIs trust each other's evidence-gathering processes, Aumann
> agreement prevents them from otherwise having a known disagreement
> about posteriors. But in general this is just a problem of the AIs
> having different beliefs so that one AI expects the other AI to act
> stupidly, and hence a merger to be more stupid than itself (though
> wiser than the other). But remember that the alternative to a merger
> may be competition, or failure to access the resources of the other AI
> - are the differences in pure priors likely to be on the same scale,
> especially after Aumann agreement and the presumably large amounts of
> washing-out empirical evidence are taken into account?

There may be another alternative, which is merger into a *non-Bayesian* AI,
which somehow has beliefs that cause it to try both combinations. (By
"somehow" I'm trying to indicate that I don't know how this would work
mathematically.) Both of the original AIs would prefer this alternative, if
it is feasible.

In the example I gave, there is no Bayesian AI that both originals would
agree to merge into. In real life, as you suggest, differences in pure
priors may be small enough that some Bayesian AI exists which both
originals can agree to merge into. But even then, there may be a
non-Bayesian AI that both originals would prefer over any of those
Bayesian AIs.
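A toy numerical sketch of the kind of thing I mean (my own illustration,
not the example from the thread, and assuming binary bets whose payoff is
the stake placed on the correct side):

```python
# Toy illustration: two AIs with conflicting pure priors evaluate
# candidate merged agents. All numbers here are illustrative assumptions.
import math

p_A, p_B = 0.9, 0.1  # each original's prior that hypothesis H is true

def expected_utility(p, stake_on_H, u=lambda x: x):
    """Expected utility, under prior p, of a merged agent staking
    `stake_on_H` of the resources on H and the rest against H; the
    payoff is the stake placed on whichever side turns out correct."""
    return p * u(stake_on_H) + (1 - p) * u(1 - stake_on_H)

# A Bayesian merged AI with any single prior bets everything one way,
# so one original always expects it to act stupidly:
# expected_utility(p_A, 1.0) == 0.9, but expected_utility(p_B, 1.0) == 0.1.

# A "try both combinations" agent splits its stake. With risk-averse
# (concave) utility, e.g. u = sqrt, both originals prefer the split to a
# coin flip between the two all-or-nothing Bayesian agents:
split_A = expected_utility(p_A, 0.5, math.sqrt)           # sqrt(0.5) ~ 0.707
coin_flip_A = (0.5 * expected_utility(p_A, 1.0, math.sqrt)
               + 0.5 * expected_utility(p_A, 0.0, math.sqrt))  # 0.5
assert split_A > coin_flip_A  # and symmetrically for B
```

The split-the-stake agent is "irrational" in the sense that no single
prior rationalizes its behavior, yet both originals can prefer it to any
Bayesian compromise.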

> I haven't read this whole thread, so I don't know if someone was
> originally arguing that mergers were inevitable - if that was the
> original argument, then all of Wei's objections thereto are much
> stronger.

I did argue that mergers may be inevitable in the original thread, and now,
having realized that mergers can produce "irrational" AIs, I am arguing that
such "irrational" AIs may be inevitable as well.
 



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT