From: Stuart Armstrong (firstname.lastname@example.org)
Date: Thu Jul 17 2008 - 03:40:23 MDT
It's probably mathematically doable to model the trade-off of
cooperation versus merging. If the utilities of the AIs are very
different, then merging becomes less attractive, especially if the
AIs give different moral weightings to different entities.
For instance, it is perfectly reasonable to prefer being president of
the United States to co-president of a world government, if one
considers that Americans have greater moral standing than foreigners.
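A toy sketch of that first point (all numbers and names are my own illustrative assumptions, not from the post): if we represent each AI's moral weighting of two groups as a vector, the distance between an AI's own weighting and an equal-weight merged utility is a crude proxy for how much merging costs it.

```python
# Toy sketch (all numbers assumed): an AI that weighs Americans far
# above foreigners loses more by merging with an AI that weighs them
# equally, because the merged utility drags policy toward the mean.

def loss_from_merging(u_self, u_other):
    """Distance between the AI's own weighting and the equal-weight
    merged utility -- a crude proxy for how unattractive merging is."""
    merged = [(a + b) / 2 for a, b in zip(u_self, u_other)]
    return sum(abs(a - m) for a, m in zip(u_self, merged))

nationalist = [0.9, 0.1]   # moral weight on (Americans, foreigners)
egalitarian = [0.5, 0.5]
near_twin   = [0.8, 0.2]

# Merging with a near-twin costs less than merging with a very
# different utility function.
assert loss_from_merging(nationalist, near_twin) < \
       loss_from_merging(nationalist, egalitarian)
```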
Even if there is a common interest in merging, there remains one thing
poorly captured by game theory, and that is the process of
negotiation. A purely game-theoretically rational AI (Adam) might
propose a merger of equals. An AI skilled in negotiation (Bertrand)
would instead propose a weighted merging of utility functions,
weighted to give itself a large advantage while still leaving Adam
better off than not merging at all. It would then rewrite its own
code to ensure it could never accept anything less than this deal.
Adam would accept, knowing it could not get a better one.
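Bertrand's move is essentially a committed ultimatum. A minimal sketch, with entirely assumed payoffs: Bertrand precommits to offering Adam just slightly more than Adam's no-merger payoff, and a purely rational Adam accepts.

```python
# Hypothetical numbers for Bertrand's take-it-or-leave-it merger offer.
NO_DEAL = 10.0          # Adam's expected utility if no merger happens
MERGER_SURPLUS = 100.0  # total utility available if the two AIs merge
EPSILON = 0.01          # the smallest sweetener Adam will notice

def adam_accepts(offer_to_adam: float) -> bool:
    """Adam, purely game-theoretically rational, takes any deal
    that beats his no-merger payoff."""
    return offer_to_adam > NO_DEAL

# Bertrand rewrites his own code to never offer more than this:
# the minimal split Adam still accepts, keeping nearly all the surplus.
bertrand_offer = NO_DEAL + EPSILON
bertrand_share = MERGER_SURPLUS - bertrand_offer

assert adam_accepts(bertrand_offer)   # Adam is (barely) better off
```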
So a negotiating AI, one capable of permanently turning down an
advantageous deal, would be in a better position than one that could
not. Rational irrationality.
Assume now that Adam also negotiates. Then the whole procedure becomes
one of comparing goals and negotiation strategies. Unlike goals,
negotiation strategies can be created and modified quite easily; there
will generally also be probabilistic aspects to a strategy. Because
of the complexity of negotiations, it is perfectly possible for both
AIs to turn down an advantageous deal.
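To see how probabilistic strategies can sink a mutually advantageous deal, here is a toy simulation (the hold-out probabilities are assumed for illustration): each AI independently holds out for a bigger share with some probability, and the deal collapses if either one does.

```python
import random

# Toy sketch (assumed numbers): each AI's negotiation strategy is a
# probability of holding out for a bigger share rather than accepting.
# If either holds out, the mutually advantageous deal collapses.

def deal_survives(p_adam_holds_out, p_bertrand_holds_out, rng):
    adam_holds = rng.random() < p_adam_holds_out
    bertrand_holds = rng.random() < p_bertrand_holds_out
    return not (adam_holds or bertrand_holds)

rng = random.Random(0)
trials = 10_000
failures = sum(not deal_survives(0.3, 0.3, rng) for _ in range(trials))
print(f"deal collapses in {failures / trials:.0%} of trials")
# analytically: 1 - 0.7 * 0.7 = 51% of the time
```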
Why is this relevant here? After all, both merging and trading involve
negotiations, so in what way do they differ? Simply that merging
happens once, while trading happens constantly; if the AIs use
probabilistic negotiation strategies, then their expected gains will
roughly equal their actual gains for trading, while the two may be
very different for merging.
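The one-shot-versus-repeated distinction can be sketched directly (payoffs assumed for illustration): a single merger negotiation realizes one draw of a gamble, while thousands of trades average out to the expected value.

```python
import random

# Toy sketch (assumed payoffs): each negotiation yields +20 with
# probability 0.5 and 0 otherwise, so the expected gain per deal is 10.
rng = random.Random(1)

def one_negotiation(rng):
    return 20.0 if rng.random() < 0.5 else 0.0

# Merging: a single one-shot negotiation -- the actual gain is 0 or 20,
# never the expected 10.
merge_gain = one_negotiation(rng)

# Trading: many repeated negotiations -- the average gain converges
# to the expected value.
n = 10_000
trade_gain = sum(one_negotiation(rng) for _ in range(n)) / n

print(f"merge (one-shot): {merge_gain}, "
      f"trade (average of {n}): {trade_gain:.2f}")
```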
Depending on how the utility functions are set up (their risk
aversion, from the human perspective), the AIs may prefer one
option over the other.
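Continuing the toy payoffs above (all assumed), the risk-aversion point falls out of standard expected-utility reasoning: a concave utility prefers the near-certain average of repeated trading, while a linear one is indifferent between that and the one-shot merger gamble.

```python
import math

# Toy sketch: the one-shot merger pays 0 or 20 with equal probability;
# repeated trading delivers roughly the expected 10 for sure.

def risk_averse_u(x):    # concave utility (risk-averse)
    return math.sqrt(x)

def risk_neutral_u(x):   # linear utility (risk-neutral)
    return x

def expected_u_of_merger(u):
    return 0.5 * u(0.0) + 0.5 * u(20.0)

# The risk-averse AI strictly prefers trading's sure-thing average...
assert expected_u_of_merger(risk_averse_u) < risk_averse_u(10.0)
# ...while the risk-neutral AI is indifferent between the two.
assert abs(expected_u_of_merger(risk_neutral_u) - risk_neutral_u(10.0)) < 1e-9
```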