Re: [sl4] Bayesian rationality vs. voluntary mergers

From: Tim Freeman (tim@fungible.com)
Date: Mon Sep 08 2008 - 07:01:14 MDT


From: "Wei Dai" <weidai@weidai.com>
>The problem here is that standard decision theory does not allow a
>probabilistic mixture of outcomes to have a higher utility than the
>mixture's expected utility, so a 50/50 chance of reaching either of two
>goals A and B cannot have a higher utility than 100% chance of reaching A
>and a higher utility than 100% chance of reaching B, but that is what is
>needed in this case in order for both AIs to agree to the merger.

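Concretely, the constraint is just that the expected utility of a
lottery is a weighted average of the utilities of its pure outcomes,
so it can never exceed both of them. A minimal sketch in Python, with
arbitrary made-up utility numbers chosen only for illustration:

    def expected_utility(p, u_a, u_b):
        # Lottery: goal A with probability p, goal B with 1 - p.
        return p * u_a + (1 - p) * u_b

    u_a, u_b = 10.0, 4.0   # hypothetical utilities of the pure goals
    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        eu = expected_utility(p, u_a, u_b)
        # E[U] is a convex combination, so it always lies between
        # u_a and u_b; it can never beat both at once.
        assert min(u_a, u_b) <= eu <= max(u_a, u_b)
        print(f"p={p:.2f}  E[U]={eu:.2f}")
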
For what it's worth, the period during which the merged AI acts
irrationally is very short. Before it has all of the assets from A
and B, it's just waiting for them. Once it has them, it flips the
unfair coin, and from that point on it's rational. So the only moment
its irrationality is observable is when it chooses to flip the coin
and commit to one plan or the other based on the result, rather than
simply pursuing the plan that's more likely to succeed.

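To make that timeline concrete, here's a toy sketch in Python; the
coin bias and the success probabilities are made-up numbers, and the
names are hypothetical:

    import random

    # Assumed success probabilities for the two plans once the merged
    # AI holds the pooled assets of A and B; plan A looks stronger.
    P_SUCCESS = {"A": 0.7, "B": 0.5}

    def commit_by_coin(bias_toward_a=0.5):
        # The one observably "irrational" step: commit by a (possibly
        # unfair) coin flip instead of picking the stronger plan.
        return "A" if random.random() < bias_toward_a else "B"

    random.seed(1)
    plan = commit_by_coin()
    print(f"committed to plan {plan}, "
          f"success probability {P_SUCCESS[plan]}")
    # A single expected-utility maximizer would skip the flip and
    # always pursue plan A; the merged AI flips because the flip is
    # what its two predecessors agreed to.
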
Everyone knows that political decision making tends to result in
irrational behavior. It's good to have a simple model of that, and
it's good to have an illustration within the model where the return
to rationality is quick. In some sense, much of the Friendly AI
problem seems to be converting the political process of a bunch of
humans into a rational whole. We have two issues here:

1. The humans aren't rational to start with.

2. Even if they were, the sort of negotiation you describe would have
to happen at some point, and there's some inevitable irrationality in
the middle of that.

Thanks for the insight.

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com
