From: Byrne Hobart (firstname.lastname@example.org)
Date: Sun Sep 07 2008 - 17:58:10 MDT
> After suggesting in a previous post that AIs who want to cooperate with
> each other may find it more efficient to merge than to trade, I realized
> that voluntary mergers do not necessarily preserve Bayesian rationality,
> that is, rationality as defined by standard decision theory. In other words,
> two "rational" AIs may find themselves in a situation where they won't
> voluntarily merge into a "rational" AI, but can agree to merge into an
> "irrational" one. This seems to suggest that we shouldn't expect AIs to be
> constrained by Bayesian rationality, and that we need an expanded definition
> of what rationality is.
I think this is on the wrong track. There doesn't seem to be any difference
between one 'irrational' agent and two agents operating under one label.
E.g. suppose turns-to-paperclips merges with turns-to-staples, creating an
entity that pursues turns-to-paperclips *or* turns-to-staples, whichever is
more likely to succeed, but pursues *neither* while the likelihoods are
known to differ. Then we have a problem: this entity cannot come into
existence if the odds of success for the two goals are already known to be
different, and if it *does* exist, it won't take any action until those
odds change.
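The stalled merger can be sketched as a decision rule (a minimal illustration of my own; the goal names and the equal-odds condition are assumptions for the sketch, not anything specified in the original post):

```python
def merged_agent_action(p_paperclips: float, p_staples: float) -> str:
    """What the hypothetical paperclip/staple merger does, given the
    known odds of success for each parent goal."""
    if p_paperclips == p_staples:
        # Neither parent is disadvantaged, so the entity may act.
        return "pursue either goal"
    # Acting now would favor one parent over the other, so the
    # merged entity stalls until the odds change.
    return "wait"
```

Under this rule the agent is inert for almost all inputs, which is the sense in which it "won't take action until those odds change."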
It looks to me like your AIs with contradictory goals (rather than merely
orthogonal ones) have no good reason to merge, and every reason to be
suspicious of one another when presented with a trade.
I think we might study such AIs by assuming that there are two possible
relationships between them: war, and temporary cease-fire. Given that a
merger guarantees zero utility to at least one party, it's probably best
treated as a very clever kind of attack.
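The zero-utility claim can be checked with toy numbers (all values here are my own assumptions for illustration, not anything from the post): if the merged entity pursues only the likelier goal, the other parent's goal is never pursued at all.

```python
# Assumed success odds for each parent goal (invented for illustration).
p_paperclips, p_staples = 0.6, 0.4

# The merger pursues only the likelier goal, so the losing parent's
# goal is abandoned and its expected utility is zero.
utility = {
    "turns-to-paperclips": p_paperclips if p_paperclips > p_staples else 0.0,
    "turns-to-staples": p_staples if p_staples > p_paperclips else 0.0,
}

# At least one party is guaranteed zero utility by the merger.
worst_off = min(utility, key=utility.get)
```

Since the worse-positioned agent would get zero from the merger but presumably some positive expected utility from continued war, accepting the merger is strictly worse for it, which is why the offer reads as an attack.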
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT