Re: Adaptation brings unFriendliness

From: Michael Wilson (starglider@bitphase.com)
Date: Mon Nov 13 2006 - 12:43:10 MST


On 13 Nov 2006 at 21:06, Joshua Fox wrote:
> If multiple near-AGIs emerge, then basic Darwinian arguments show that the
> one that reproduces itself the best will have the most copies; and
> mutations favoring survival will spread. (Reproduction here means building
> the next generation of technology, based on the previous system and perhaps
> with its help.)

It's extremely difficult to make strong predictions about what this
competitive landscape would look like. A lot of it depends on your definition of
'multiple'. For larger numbers of AIs, co-operation becomes more and
more viable (and eventually, essential) for competing against AIs at a
similar level of development. That said, the only way I can see this coming
about is through a badly designed AGI splintering while undergoing takeoff
(which is fairly plausible if it's Internet-distributed).

> Yet clearly mutations that involve destroying and/or using the resources of
> (potential) competitors are often adaptive. Thus, AGIs that are not only
> unFriendly but downright aggressive will emerge.

There isn't /necessarily/ any correlation between how the AGI treats other
AIs and how the AGI treats humans (more properly, a distinction between
classes of sapients and their goal systems). A well-designed goal
structure can differentiate between these, but an 'emergent' one may do
almost any kind of generalisation. While analogies to humans are usually
ill-advised, I'd note that neither human co-operative nor competitive
drives extend to ants; generally we just wipe them out when they get in
the way. Humans would get in the way a lot during a nanowar (and any
AGI trying to save /physical/ humans would probably be at a huge
disadvantage).

> I suppose that in a hard takeoff, the leading AGI could gain so much
> (possibly Friendly) power to make all questions moot.

Well, it would make external competitive dynamics irrelevant. I'd guess
that (unstructured) internal competitive dynamics tend to get stripped
out once deliberative self-modification becomes available. There almost
certainly are metastable goal systems that never settle down to a
coherent optimisation target, but I suspect they're rare.

> But otherwise, doesn't the above suggest that we do have some idea of
> the direction in which AGIs will tend to develop within the space of
> possible intelligences, and that it's not a good one?

Somewhat, but unFriendliness from a single AI that simply ends up with
human-incompatible goals seems a lot more likely to me.

P.S. Good short story on a seed AI: http://qntm.org/transit

Many of the other stories on the same site are quite interesting.

Michael Wilson
Director of Research and Development, Bitphase AI Ltd
Web demos page : http://www.bitphase.com/apex/
