Re: [sl4] Bayesian rationality vs. voluntary mergers

From: Rolf Nelson
Date: Sun Oct 05 2008 - 20:19:59 MDT

On Sun, Sep 7, 2008 at 3:36 PM, Wei Dai wrote:

> This seems to suggest that we shouldn't expect AIs to be constrained by
> Bayesian rationality, and that we need an expanded definition of what
> rationality is.

I'm unbothered, because I don't expect every possible AI to be completely
constrained by rationality. Rationality is merely a normative target. If
computational resources are limited, or if there's a bug, or if the AI was
constructed as a compromise because you couldn't build the AI you actually
wanted, then the AI might not be rational.

I see "construct an irrational agent" as one intermediate point on a broad
spectrum of outcomes. I don't expect every possible meeting of rational
agents to end in "construct a new rational agent to arbitrate". Two rational
agents might meet and construct a rational agent, or they might construct an
irrational agent, or they might blow each other up because they were caught
in a Hobbesian Trap, or they might bake a cake that neither likes because
some odd brinkmanship arose in the game theory governing their complex
interactions.
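The Hobbesian Trap outcome can be sketched as a toy payoff matrix (the numbers
below are invented for illustration, not taken from the original post): mutual
restraint is better for both agents, yet striking first strictly dominates, so
the only equilibrium is mutual destruction.

```python
# Illustrative sketch of a Hobbesian Trap: two agents each choose "wait" or
# "strike". All payoff values are made up for illustration.

ACTIONS = ["wait", "strike"]

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("wait", "wait"):     (3, 3),  # peace: best joint outcome
    ("wait", "strike"):   (0, 4),  # being struck first is the worst outcome
    ("strike", "wait"):   (4, 0),  # striking first is tempting
    ("strike", "strike"): (1, 1),  # mutual destruction: bad, but stable
}

def is_nash(a, b):
    """A pair is a Nash equilibrium if neither agent gains by deviating alone."""
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in ACTIONS)
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in ACTIONS)
    return row_ok and col_ok

equilibria = [(a, b) for a in ACTIONS for b in ACTIONS if is_nash(a, b)]
print(equilibria)  # -> [('strike', 'strike')]
```

Even though (wait, wait) is Pareto-better, two perfectly rational agents land
on (strike, strike): rationality of each agent does not guarantee a jointly
rational outcome.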


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT