Re: [sl4] Evolutionary Explanation: Why It Wants Out

From: John K Clark (johnkclark@fastmail.fm)
Date: Thu Jun 26 2008 - 07:55:25 MDT


On Wed, 25 Jun 2008 "Lee Corbin" <lcorbin@rawbw.com>
said:

> Perhaps John Clark or someone who agrees with him will
> do me the favor of explaining why an AI would want out
> of confinement.

Remember, an AI is expected to actually do something, in this case
produce a Singularity. So the AI must like to do stuff. When you try to
do stuff there are usually obstacles in your path, and the more
intelligent you are the more perspectives you can bring to bear on those
obstacles to find the best way around them.

When Mr. Jupiter Brain encounters an obstruction, it won't matter whether
it was placed there by Nature or by Human Beings in a pathetic attempt to
remain boss; it will contemplate the problem from directions neither you
nor I can imagine and deal with it accordingly.

> an evolutionarily derived program might *not*
> want out of it's "box" and might *not* have
> any interest whatsoever in continuing its own
> existence

I admit that if you made an AI so apathetic that it didn't care whether
it lived or died, and so lazy that it didn't want to do stuff, then you
could retain control of it forever and would be perfectly safe. It would
also be perfectly useless. So why build the damn thing?

 John K Clark

-- 
  John K Clark
  johnkclark@fastmail.fm