From: Cliff Stabbert (email@example.com)
Date: Fri Jul 26 2002 - 21:45:17 MDT
Friday, July 26, 2002, 10:04:50 PM, Dani Eder wrote:
DE> Given the propensity for people to let the AI out
DE> of its box, for safety's sake the seed AI hardware
DE> design will need a requirement of "no way for a
DE> human to let the AI out". For example, the computer
DE> hardware contained within a sealed box which when
DE> opened will release acid to destroy the hardware.
DE> Output devices (like monitor screens) behind glass
DE> windows. The room they are in full of nerve gas,
DE> so any attempt to hack the output devices is very
DE> difficult, etc. We discussed this at length
DE> previously, but it seems like we need to include
DE> the safety provisions in a seed AI installation.
I'm no longer sure what we would then be trying to achieve with Seed
AI. Once you get enough bandwidth out, be it TTY or what have you, to
be able to make any useful determination*, it would appear likely to
be "already too late" -- that channel, or the person(s) watching it,
can be subverted to convince the person to let it** out.
/Presuming/ that it would be useful and desirable to have an AI box
test at some point, the only two things I can see as potentially
workable are:
a) sandboxing of some sort, i.e. a simulated environment that the AI
is convinced is the full environment (with the tester, perhaps, just
as convinced of release into full freedom), with subsequent monitoring
to see if once released into this virtual environment, the
"revealed" (via actions) personality of the AI is as friendly as
presented in conversation
b) slowing down the AI's clock while eliminating/minimizing external
clues that this is in fact occurring, e.g. run the processors at
1 kHz instead of 1 MHz; slow it down exponentially as it becomes
exponentially more parallel, or what-have-you.
* itself quite questionable.
** or a replica of some sort, e.g. by convincing the person to build a
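As a loose illustration of the clock-slowing idea in (b): the essential point is that the supervisor, not the boxed process, decides how fast subjective time passes. A minimal sketch in Python (the `throttled_run` helper and the step-rate numbers are hypothetical, purely for illustration; real throttling would happen at the hardware or hypervisor level, not in the process itself):

```python
import time

def throttled_run(step_fn, steps, max_steps_per_sec):
    """Run step_fn() `steps` times, capped at max_steps_per_sec.

    Crude sketch of slowing a boxed process's effective clock:
    an external supervisor paces each step against wall-clock time,
    so halving max_steps_per_sec halves the process's subjective speed.
    """
    interval = 1.0 / max_steps_per_sec
    start = time.monotonic()
    for i in range(steps):
        step_fn()
        # Sleep until this step's scheduled wall-clock slot.
        target = start + (i + 1) * interval
        remaining = target - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
    return time.monotonic() - start

# e.g. 20 steps capped at 200 steps/sec must take at least ~0.1 s
```

Note that hiding the throttling from the process (the "eliminating external clues" part) is the hard problem: any externally visible clock, network timestamp, or physical sensor would reveal the slowdown, so this sketch only covers the pacing, not the concealment.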
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT