Re: [sl4] trade or merge?

From: CyTG (cytg.net@gmail.com)
Date: Thu Jul 17 2008 - 01:13:03 MDT


If the intelligence of a self-improving AI is rising exponentially, why
would it ever want to 'merge' with another AI?
Or are we imagining that there's an upper limit to the capacity of an AI,
a point of diminishing returns where throwing more hardware at it no
longer helps and the design simply cannot scale beyond some level (even
when hand-crafted by an uber AI)? Following on from that, we imagine it
might make sense to cluster these things to go beyond single-AI-instance
performance.
Is that what we're talking about here? :-)

On Thu, Jul 17, 2008 at 3:30 AM, Wei Dai <weidai@weidai.com> wrote:

> Consider two (contemporary) corporations that want to cooperate with each
> other for mutual gain. They have two general options, trade or merge, and
> the choice can be seen as a tradeoff between different kinds of overhead
> costs. For trading, these include bargaining, contract enforcement, and
> missed opportunities due to asymmetric information. Merging avoids some of
> these, but increases agency costs (see
> http://en.wikipedia.org/wiki/Agency_cost).
>
> It occurs to me that two AIs who want to cooperate with each other face the
> same choices, but merging might be a much more attractive option for them.
> In case it's not obvious, "merging" here means something like creating a
> third AI that will try to optimize a weighted average of the two AIs'
> utility functions, and then transferring all information and physical assets
> to this new AI. The reason that merging is more attractive is that trading
> still retains the same transaction costs, but merging doesn't incur
> perpetual agency costs. Instead, each of the original AIs only needs to
> verify during the initial construction of the new AI that it will in fact try
> to optimize the agreed-upon utility function. (This seems much easier than
> proving or verifying what source code an existing AI is running.)
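>
> A minimal sketch of that construction (hypothetical names, a single
> bargained-over weight, and utility functions assumed to map shared
> world-states to real numbers):
>
>     from typing import Any, Callable
>
>     State = Any  # whatever representation of a world-state the AIs share
>     Utility = Callable[[State], float]
>
>     def merge_utilities(u_a: Utility, u_b: Utility, weight_a: float) -> Utility:
>         """Utility function the new, merged AI would try to optimize:
>         a fixed convex combination of the two original utility functions."""
>         assert 0.0 <= weight_a <= 1.0
>         def u_merged(state: State) -> float:
>             return weight_a * u_a(state) + (1.0 - weight_a) * u_b(state)
>         return u_merged
>
> The bargaining then reduces to agreeing on weight_a, after which each
> original AI only has to check, at construction time, that the new AI's
> code really does optimize u_merged.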
>
> If this analysis is correct, it may be that any society of AIs will
> voluntarily merge into a singleton, in order to maximize gains from
> cooperation. This singleton will then try to maximize a weighted average of
> all of the original AIs' utility functions.
>
> What about other kinds of minds, for whom merging may not be possible?
> Well, if they are capable of self-modification, a group of such minds can
> agree to modify themselves to each maximize a common combined utility
> function, and that should work just as well as merging, as long as the
> agreement can be enforced and verified (say with the help of a trusted third
> party). This seems to imply that the ability to self-modify will lead to
> voluntary borgification, but this process may stop short of a singleton
> (because when there are only two such "borgs" left, who will act as the
> trusted third party?)
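>
> As a toy illustration of the verification step (hypothetical names, and
> setting aside that inspecting a real mind's utility function is the hard
> part), the trusted third party might at least spot-check that every
> self-modified mind now reports the agreed combined utility on sampled
> states:
>
>     def third_party_check(minds, combined_utility, sample_states, tol=1e-9):
>         """True iff each mind's reported utility matches the common
>         combined utility function on every sampled state."""
>         return all(
>             abs(mind.utility(s) - combined_utility(s)) <= tol
>             for mind in minds
>             for s in sample_states
>         )
>
> where each mind is assumed to expose its post-modification utility
> function as mind.utility.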
>
> This line of argument may seem to depend on the assumption that
> intelligence==optimization process, which is something I've previously
> argued against [1]. However, it may be that once an intelligence has
> satisfied all of its goals that cannot be modeled as an optimization
> process (such as finding answers to the philosophical questions in [1]),
> its work on the remaining goals can be seen as an optimization process.
>
> (For nit-picking decision theorists, please take "maximize combined utility
> function" to mean "maximize expected utility under a linear combination of
> the individual priors and a linear combination of the individual utility
> functions".)
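>
> Concretely, with my own symbols rather than anything fixed above: given
> priors p_A, p_B, utility functions U_A, U_B, and agreed weights \lambda
> and \mu, the merged AI would choose actions a to maximize
>
>     EU(a) = \sum_\omega [\lambda p_A(\omega) + (1-\lambda) p_B(\omega)]
>                         [\mu U_A(\omega, a) + (1-\mu) U_B(\omega, a)]
>
> summing over the possible world-states \omega.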
>
> [1] http://www.nabble.com/answers-I%27d-like-from-an-SI-td14007499.html
>


