Re: Darwinian dynamics unlikely to apply to superintelligence

From: Wei Dai (weidai@weidai.com)
Date: Fri Jan 02 2004 - 22:18:19 MST


On Fri, Jan 02, 2004 at 10:31:32PM -0500, Eliezer S. Yudkowsky wrote:
> Or at least, not heritable variations of a kind that we regard as viruses,
> rather than, say, acceptable personality quirks, or desirable diversity.
> But even in human terms, what's wrong with encrypting the control block?

The behavior of a machine depends on both code and data, in other words on
its control algorithms and on its state information. You can protect the
code, but the data is going to change and cause differences in behavior.
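To illustrate (a toy sketch, not anything an SI would actually run): suppose
the control procedure itself is fixed and its integrity can be verified, but
each copy accumulates its own state. The names and the decision rule below
are made up purely for illustration.

import hashlib

# The "control block": a fixed decision procedure whose integrity can be
# checked, e.g. by hashing its compiled bytecode.
def decide(observation, memory):
    memory.append(observation)          # mutable state accumulates
    return "expand" if sum(memory) % 2 == 0 else "consolidate"

CONTROL_HASH = hashlib.sha256(decide.__code__.co_code).hexdigest()

# Two copies share byte-identical, verified code...
memory_a, memory_b = [], []
assert hashlib.sha256(decide.__code__.co_code).hexdigest() == CONTROL_HASH

# ...yet different histories (data) already produce different behavior.
print(decide(1, memory_a))   # consolidate
print(decide(2, memory_b))   # expand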

> Humans are hardly top-of-the-line in the cognitive security department.
> You can hack into us easily enough. But how do you hack an SI
> optimization process? What kind of "memes" are we talking about here?
> How do they replicate, what do they do; why, if they are destructive and
> foreseeable, is it impossible to prevent them? We are not talking about a
> Windows network.

I don't know what kind of memes SI components would find it useful to
exchange with each other, but perhaps precomputed chunks of data,
algorithmic shortcuts, scientific theories, information about other SIs
encountered, philosophical musings, etc. This so-called "SI optimization
process" is of course actually a complex intellect. How can you know that
no meme will arise in ten billion years, anywhere in the visible universe,
that could cause it to change its mind about what kind of optimization it
should undertake?

Your question "why, if they are destructive and foreseeable, is it
impossible to prevent them" makes you sound like you've never thought
about security problems before. It's kind of like asking "why, if the Al
Queda are destructive and foreseeable, is it impossible to prevent them?"
Well, it may not be impossible, but doing so will certainly impose a cost.

> Okay, so possibly a Friendly SI expands spherically as (.98T)^3 and an
> unfriendly SI expands spherically as (.99T)^3, though I don't see why the
> UFSI would not need to expend an equal amount of effort in ensuring its
> own fidelity.

Because the UFSI has a bigger threat to deal with, namely the FSI. And the
FSI, once it notices the UFSI, also has a bigger threat to deal with and
would be forced to lower its own efforts at ensuring fidelity.

> Even so, under that assumption it would work out to a
> constant factor of UFSIs being 3% larger; or a likelihood ratio of 1.03 in
> favor of observing UFSI (given some prior probability of emergence); or in
> terms of natural selection, essentially zero selection pressure - and you
> can't even call it that, because it's not being iterated. I say again
> that natural selection is a quantitative pressure that can be calculated
> given various scenarios, not something that goes from zero to one given
> the presence of "heritable difference" and so on.

This analysis makes no sense. If you have two spheres expanding at
different rates, one of them is eventually going to completely enclose the
other, in this case cutting off all further growth of the Friendly SI. And
that doesn't even take into consideration the possibility that the UFSI
could just eat the FSI.
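To put rough numbers on it (the speeds and the separation below are just
illustrative assumptions, not figures from the thread):

# Illustrative assumptions: expansion speeds as fractions of c, and a
# separation D between the two points of origin, in light years.
V_FSI, V_UFSI = 0.98, 0.99
D = 1000.0

# The constant-factor view: at any time t the volume ratio is fixed.
volume_ratio = (V_UFSI / V_FSI) ** 3
print("volume ratio: %.3f" % volume_ratio)         # ~1.031, i.e. ~3%

# But the faster frontier overtakes the far edge of the slower sphere at
# roughly t = D / (V_UFSI - V_FSI); from then on the FSI is enclosed and
# its growth stops, while the UFSI keeps expanding without bound.
t_enclosed = D / (V_UFSI - V_FSI)
print("enclosed after ~%.0f years" % t_enclosed)   # ~100,000 years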


