Re: SI Jail

From: Justin Corwin (thesweetestdream@hotmail.com)
Date: Tue Jun 26 2001 - 01:58:01 MDT


Marc raises an important point. if you're jailing an AI from turn-on, it is either a hard-coded AI with pre-encoded knowledge structures (which seems ridiculously difficult for a first AI), or a seed AI, in which case it will need an entire "imprisoned world" to interact with. perhaps a snapshot of the Web, a decent amount of literature to read, and a modeled 3d world to interact with? this would also provide a forum to "meet the AI". in its 3d world, you could send a messenger avatar to "appear" and talk to ver.
also, providing such a stratified "world" for it to interact with has other advantages. it could grow up unaware it's interacting with a simulation. after all, it only has the senses we decide to give it. we can always tell it we're moving its home equipment to a new location, turn it off, turn it back on, and hook it up to the "real sensors" instead of a simulation when we decide to let it free.

another issue i'd like to raise: a lot of people are using the imagery of a locked box to confine the AI. it's not as if it's in a locked room, where you have to go in to talk to it. if i had an untested AI running, i could be resting my hand on top of its processor chip, and nothing would happen. (except my hand would either be burned or frozen, depending on how the chip was cooled. nitrogen all over, ouch, let me get a towel..) you could have multiple people at the 'interaction point', be it in front of the AI's camera, at the terminal, or puppeting a 3d model into the AI's "physical world". an AI is isolated just by the i/o ports we decide to give it. although it probably wouldn't be a good idea, i could run the AI on my athlon right now, so long as i unplugged my computer from my ethernet connection. it's not as if being "physically free" gives it any more power than being in a locked room. (though it would probably be a good idea not to run it off commercial power, as it might try to use the power lines as a communications medium.)
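the point that "an ai is isolated just by the i/o ports we decide to give it" can be sketched in a few lines. this is a purely illustrative toy, not anyone's real AI architecture: `boxed_ai`, `TextOnlyChannel`, and `ask` are invented names, and the AI is a stand-in function. the idea is only that the jail is the narrowness of the channel, not any physical box.

```python
# hypothetical sketch: the "jail" is the I/O channel, not a locked room.
# the untested program (a stand-in function here) receives only the text
# we pass in, and can act only by returning text.

def boxed_ai(prompt: str) -> str:
    # stand-in for the untested AI: text in, text out, nothing else.
    return f"echo: {prompt}"

class TextOnlyChannel:
    """the sole i/o port granted to the boxed process."""

    def __init__(self, program):
        self._program = program
        self.transcript = []  # silent observers can audit every exchange

    def ask(self, message: str) -> str:
        reply = self._program(message)
        self.transcript.append((message, reply))
        return reply

channel = TextOnlyChannel(boxed_ai)
print(channel.ask("hello"))  # the AI's entire world is this call
```

note that the multiple-interactor idea fits naturally here: any number of people can call `ask`, and the silent observers read `transcript` without the boxed program ever being told they exist.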

also, a team of interactors would make it more difficult for the AI to maliciously influence them, particularly if there are silent members that the AI knows nothing about (or at least is told nothing explicit about). it may be smarter than us, but even we find it hard to track ants in sufficient numbers. or so it would seem. it would be difficult to put together a team that doesn't include a weak human, but given sufficient precautions, a few observers that don't interact with the AI, and some external security, it seems to me we could safely ascertain its intentions. it's not magical, after all; even as intelligent as it is, it can't surmise things from nothing.

justin

----- Original Message -----
From: Marc Forrester
Sent: Tuesday, June 26, 2001 1:05 AM
To: sl4@sysopmind.com
Subject: Re: SI Jail

Apologies if this has been covered, but it doesn't appear to be in my
archive anywhere..

The question I have about all of these discussions is not whether it is
practically possible to keep an SI jailed, but rather whether it is
practically possible to create a jailed SI in the first place. If you kept
a human in an equivalent state of impotent isolation from birth, all that
would develop in their brain would be an unhappy, autistic navel-gazing
'mind' with no ability to function in the outside world or communicate with
anyone but its keeper. What would be the point?

Intelligence requires extelligence. How do you grow a usefully intelligent
mind without giving the developing seed the ability to explore and play with
the world around it in ways rich and complex enough to afford it immediate
and total freedom the instant it achieves hard take-off?



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT