From: Ben Goertzel (email@example.com)
Date: Sat May 06 2006 - 21:13:58 MDT
> To be precise, you've just stated that you can have a persistent
> disagreement with someone whom you believe to be rational, who does not
> believe you to be rational. But this implies a further dispute over
> your own rationality - you and the other have different probability
> assignments about this. Do you believe that the person has biased data
> in this dispute, or that he is not rationally evaluating your own
> rationality?
Well, to be consistent with the assumption that I believe the other
person is rational, I should also believe that the person has biased
data about my rationality...
> For those just tuning in, Aumann's Agreement Theorem is the base result
> that shows that if perfect Bayesians have common knowledge of each
> other's probability assignments (I know, you know I know, I know you
> know ad infinitum) then they have the same probability assignments. The
> original Agreement Theorem has been extended in dozens of different ways
> by weakening various assumptions; there's a cottage industry built
> around it.
Sure... I do not know this literature at all well, but I know that it exists...
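One concrete entry in that literature is the Geanakoplos-Polemarchakis dialogue process ("We can't disagree forever", 1982): two agents with a common prior repeatedly announce their posteriors for a disputed event and refine their information in light of each other's announcements, and the announcements provably converge. A minimal sketch in Python, with an entirely made-up state space, partitions, and event (none of these numbers come from the discussion above):

```python
from fractions import Fraction

# Toy model: 9 equally likely worlds (all numbers hypothetical).
states = set(range(1, 10))
prior = {s: Fraction(1, 9) for s in states}
event = {3, 4}          # the proposition the two agents disagree about
true_state = 1          # the world they actually inhabit

# Each agent's private evidence is modeled as an information partition.
alice = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
bob = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

def post(cell):
    """Posterior probability of `event` given that the world lies in `cell`."""
    return sum(prior[s] for s in cell & event) / sum(prior[s] for s in cell)

def cell_of(partition, s):
    return next(c for c in partition if s in c)

def refine(listener, speaker, announced):
    """Hearing the speaker's posterior tells the listener which of the
    speaker's cells could have produced it; split the listener's cells
    against that union of worlds."""
    compatible = set().union(*(c for c in speaker if post(c) == announced))
    pieces = []
    for c in listener:
        for part in (c & compatible, c - compatible):
            if part:
                pieces.append(part)
    return pieces

pa = post(cell_of(alice, true_state))   # Alice's opening posterior: 1/3
pb = post(cell_of(bob, true_state))     # Bob's opening posterior: 1/2
while pa != pb:
    bob = refine(bob, alice, pa)        # Alice announces, Bob updates
    pb = post(cell_of(bob, true_state))
    alice = refine(alice, bob, pb)      # Bob announces, Alice updates
    pa = post(cell_of(alice, true_state))

print(pa, pb)                           # they meet at 1/3
```

Note that convergence here leans on exactly the assumptions under dispute: a shared prior, perfectly honest announcements, and exact Bayesian updates on each announcement.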
Of course, realistically, human or AI systems with finite resources
are not going to be perfect Bayesians but are going to use various
approximation assumptions in their reasoning (either that or they will
have very complex and largely arbitrary priors...). So these theorems
about perfectly rational beings are not all that interesting, except
insofar as they point the way toward analogous results about
imperfectly rational beings.
And it's quite clear that two agents who believe each other to be
*very rational* though not perfectly rational can disagree, especially
if they have radically different experience and knowledge bases and
not much time to share them in detail.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT