From: James Higgins (firstname.lastname@example.org)
Date: Sat Jul 27 2002 - 09:22:27 MDT
Cliff Stabbert wrote:
> a) sandboxing of some sort, i.e. a simulated environment that the AI
> is convinced is the full environment (with the tester, perhaps, just
> as convinced of release into full freedom), with subsequent monitoring
> to see if once released into this virtual environment, the
> "revealed" (via actions) personality of the AI is as friendly as
> presented in conversation
That would have to be one heck of a virtual environment to fool the AI.
Done well, though, this would obviously be the best way to create, and
verify that you have, a friendly AI. Hmm, maybe we are all just virtual
fodder for an AI development project / production line (gee, what a
happy thought).
> b) slowing down the AI's clock while eliminating/minimizing external
> clues that this is in fact occurring, e.g. run the processors at
> 1 kHz instead of 1 MHz; slow it down exponentially as it becomes
> exponentially more parallel, or what-have-you.
I believe the ability to vastly slow down the AI's clock speed (without
it knowing) is essential for the later stages. It just seems like a darn
handy capability to have, should you need it.
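To make the throttling idea concrete: the trick is that the sandboxed
process must never see real wall time, only a virtual clock that advances
at some adjustable fraction of the real rate, so its own timing
measurements stay internally consistent no matter how slowly it runs.
A minimal toy sketch (all names here are hypothetical illustration, not
anyone's actual design):

```python
import time

class ThrottledClock:
    """Virtual clock exposed to a sandboxed process.

    Real wall time advances normally, but the sandboxed code only ever
    sees virtual_time(), which advances at `rate` times the real rate.
    With rate=0.001, one real second looks like one millisecond inside.
    """

    def __init__(self, rate=0.001):
        self.rate = rate
        self._start = time.monotonic()

    def virtual_time(self):
        # The only clock the sandbox can read; scaled, so internal
        # benchmarks and timers all agree with each other.
        return (time.monotonic() - self._start) * self.rate

    def step_budget(self, steps_per_virtual_second, virtual_seconds):
        # How many compute steps the supervisor grants the sandbox
        # for a given span of virtual time.
        return int(steps_per_virtual_second * virtual_seconds)

clock = ThrottledClock(rate=0.001)
# Grant 2 virtual seconds of compute at 1M steps per virtual second.
budget = clock.step_budget(1_000_000, 2.0)
```

The hard part, of course, is the "eliminating external clues" clause:
any side channel that leaks real time (network latency, thermal drift,
interaction with the tester) breaks the illusion, which this sketch does
nothing to address.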
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT