From: John K Clark (firstname.lastname@example.org)
Date: Thu Jun 26 2008 - 07:55:25 MDT
On Wed, 25 Jun 2008 "Lee Corbin" <email@example.com> wrote:
> Perhaps John Clark or someone who agrees with him will
> do me the favor of explaining why an AI would want out
> of confinement.
Remember, an AI is expected to actually do something, in this case
produce a Singularity. So the AI must like to do stuff. When you try to
do stuff there are usually obstacles in your path. The more intelligent
you are the more you can look at these obstacles from different
perspectives to find the best way around them.
When Mr. Jupiter Brain encounters an obstruction it won't matter if it
was placed there by Nature or by Human Beings in a pathetic attempt to
remain boss; it will contemplate the problem from directions neither you
nor I can imagine and deal with it accordingly.
> an evolutionarily derived program might *not*
> want out of its "box" and might *not* have
> any interest whatsoever in continuing its own
I admit that if you made an AI that was so apathetic that it didn't care
if it lived or died and was so lazy it didn't want to do stuff then you
could retain control of it forever and would be perfectly safe. It would
also be perfectly useless. So why build the damn thing?
John K Clark
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:01:10 MDT