Re: [sl4] trade or merge?

From: Byrne Hobart (bhobart@gmail.com)
Date: Wed Jul 16 2008 - 22:33:39 MDT


What if your AIs operate with different paradigms? Imagine that one is a
collective utility-maximizer that ignores all other rights, and the other is
maximizing its own utility, but doing so without violating the property
rights of others. It seems these utility curves can't simply be averaged
together, because what counts as utility to one is disutility to the other
(e.g. the first AI would consider it good to steal food from the rich and
give it to the poor; the second would consider this vile, but wouldn't mind
lending the poor money at high interest rates, or offering them very
low-wage employment).
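
To make the conflict concrete, a toy sketch in Python (the actions, the
numbers, and both utility functions below are invented for illustration).
Averaging is mechanically possible, but the merged verdict on redistribution
depends entirely on the weight, so one of the two AIs will always regard the
averaged result as a loss:

# Toy model: two AIs score the same actions with opposed signs.
# All numbers here are illustrative, not derived from anything above.

def u_collective(action):
    # Collective utility-maximizer: redistribution raises total welfare.
    return {"redistribute": 10, "lend_at_interest": 2}[action]

def u_propertarian(action):
    # Property-rights maximizer: redistribution is theft; lending is fine.
    return {"redistribute": -10, "lend_at_interest": 3}[action]

def u_merged(action, w):
    # Weighted average of the two utility functions.
    return w * u_collective(action) + (1 - w) * u_propertarian(action)

for w in (0.3, 0.5, 0.7):
    # The merged verdict on "redistribute" flips sign with w
    # (roughly -4, 0, +4), so the weight itself is the whole dispute.
    print(w, u_merged("redistribute", w))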

I suspect that AIs' utility curves will not be as simple as ours.

On Wed, Jul 16, 2008 at 9:30 PM, Wei Dai <weidai@weidai.com> wrote:

> Consider two (contemporary) corporations that want to cooperate with each
> other for mutual gain. They have two general options, trade or merge, and
> the choice can be seen as a tradeoff between different kinds of overhead
> costs. For trading, these include bargaining, contract enforcement, and
> missed opportunities due to asymmetric information. Merging avoids some of
> these, but increases agency costs (see
> http://en.wikipedia.org/wiki/Agency_cost).
>
> It occurs to me that two AIs who want to cooperate with each other face the
> same choice, but merging might be a much more attractive option for them.
> In case it's not obvious, "merging" here means something like creating a
> third AI that will try to optimize a weighted average of the two AIs'
> utility functions, and then transferring all information and physical assets
> to this new AI. Merging is more attractive because trading still incurs all
> of the usual transaction costs, whereas merging incurs no perpetual agency
> costs. Instead, each of the original AIs only needs to verify, during the
> initial construction of the new AI, that it will in fact try to optimize
> the agreed-upon utility function. (This seems much easier than proving or
> verifying what source code an existing AI is running.)
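
A minimal sketch of this merge construction, assuming both utility functions
can be evaluated over a shared set of candidate actions (the agents, actions,
and weight below are hypothetical):

# Sketch of "merging": construct a third agent that optimizes a
# weighted average of the two original utility functions. The weight w
# is negotiated once, at construction time, rather than per transaction.

def make_merged_agent(u1, u2, w):
    """Return a decision procedure that optimizes w*u1 + (1-w)*u2."""
    def merged_utility(action):
        return w * u1(action) + (1 - w) * u2(action)
    def choose(actions):
        # The new AI simply picks whatever maximizes the merged utility.
        return max(actions, key=merged_utility)
    return choose

# Hypothetical originals: each scores candidate uses of the pooled assets.
u1 = lambda a: {"build_factory": 5, "fund_research": 2}[a]
u2 = lambda a: {"build_factory": 1, "fund_research": 6}[a]

ai3 = make_merged_agent(u1, u2, w=0.5)
print(ai3(["build_factory", "fund_research"]))  # fund_research (4.0 vs 3.0)

The point of the construction is that u1 and u2 are inspected only once, when
ai3 is built; after the handover there is no ongoing monitoring of either
original, which is where the agency costs would otherwise accrue.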
>
> If this analysis is correct, it may be that any society of AIs will
> voluntarily merge into a singleton, in order to maximize gains from
> cooperation. This singleton will then try to maximize a weighted average of
> all of the original AIs' utility functions.
>
> What about other kinds of minds, for whom merging may not be possible?
> Well, if they are capable of self-modification, a group of such minds can
> agree to modify themselves to each maximize a common combined utility
> function, and that should work just as well as merging, as long as the
> agreement can be enforced and verified (say with the help of a trusted third
> party). This seems to imply that the ability to self-modify will lead to
> voluntary borgification, but this process may stop short of a singleton
> (because when there are only two such "borgs" left, who will act as the
> trusted third party?).
>
> This line of argument may seem to depend on the assumption that
> intelligence==optimization process, which is something I've previously
> argued against [1]. However, it may be that once an intelligence satisfies
> all of its goals that cannot be modeled as an optimization process (such
> as finding answers to the philosophical questions in [1]), its work on the
> remaining goals can be seen as an optimization process.
>
> (For nit-picking decision theorists, please take "maximize combined utility
> function" to mean "maximize expected utility under a linear combination of
> the individual priors and a linear combination of the individual utility
> functions.")
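
Spelled out (notation mine: s ranges over states, a over actions, p_1 and
p_2 are the individual priors, u_1 and u_2 the individual utility functions,
and \lambda, w \in [0,1] are the negotiated combination weights):

  a^* = \arg\max_a \sum_s
        \bigl[ \lambda\, p_1(s) + (1-\lambda)\, p_2(s) \bigr]
        \bigl[ w\, u_1(s,a) + (1-w)\, u_2(s,a) \bigr]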
>
> [1] http://www.nabble.com/answers-I%27d-like-from-an-SI-td14007499.html
>


