Re: SI Jail

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jun 26 2001 - 02:02:42 MDT


Marc Forrester wrote:
>
> Apologies if this has been covered, but it doesn't appear to be in my
> archive anywhere..
>
> The question I have about all of these discussions is not whether it is
> practically possible to keep an SI jailed, but rather whether it is
> practically possible to create a jailed SI in the first place. If you kept
> a human in an equivalent state of impotent isolation from birth, all that
> would develop in their brain would be an unhappy, autistic navel-gazing
> 'mind' with no ability to function in the outside world or communicate with
> anyone but its keeper. What would be the point?

The assumption I've been using is a controlled ascent; an externally
initiated shutdown after hard takeoff has unambiguously begun, but before
the AI is transhuman; transfer to black-box hardware; and reinitiation.
Presumably this gives you an SI-in-a-box. How long it remains in the box
is the question being asked. Personally, I wouldn't be really surprised
if the effect of launching a hard takeoff in a black box and the effect
of launching a hard takeoff in a nanotechnology lab turned out to be
basically the same. They're both ultimately just configurations of atoms,
after all, and just because we name one a "jail" and the other a
"nanotechnology lab" doesn't mean the names are relevant. They may just
be tall fence posts standing in the middle of a vast field.

An SI, when created, becomes the center of the Universe. The most
intelligent thing around is always the center of the Universe. Imagine,
if chemicals could talk, their strategy for the safe development of
intelligence in a black box - drop RNA on a planet, where the force of
gravity will keep it down. An instant later, the RNA evolves into humans,
who build spaceships and pop right off the planet. What I'm saying is
that even if you built an SI into a black box with absolutely no escape
holes, no input or output to the outside world, I still think that in the
due course of time - i.e., immediately, by human standards - the SI would
escape.

I've thought of at least one plausible method an SI could use to affect
our world from a total black box. It's easy enough to prevent, of course,
*if* you think of it in advance. Nothing magical about it... just a
clever hack. But I haven't seen anyone here think of it yet. And that's
rather the problem, isn't it?

Anything smarter than you are *is* magic. Pure and simple. It can just
hose you for no reason you anticipated, maybe even no reason you can
understand.

> Intelligence requires extelligence. How do you grow a usefully intelligent
> mind without giving the developing seed the ability to explore and play with
> the world around it in ways rich and complex enough to afford it immediate
> and total freedom the instant it achieves hard take-off?

Different species, different environments. As a programmer, I can attest
that the World of Source Code contains more interesting complexity than
anything I've ever encountered, unless it be the landscape of my own
mind. For a seed AI, of course, those two are the same thing.

Bear in mind also that the fact that humans are designed to grow up in a
rich environment most certainly does not prove that the same must hold of
minds-in-general; see "The Adapted Mind", edited by Barkow, Cosmides, and
Tooby.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


