From: James Higgins (firstname.lastname@example.org)
Date: Fri Jun 22 2001 - 20:03:22 MDT
At 04:30 PM 6/22/2001 -0600, John Stick wrote:
> In a situation similar to that posed by James Higgins, I would think
>that the key variable is not the speed of the SI, but the amount of
>information it has of its exact situation and the situation of its human
>interrogator. To let the SI know as much as Eliezer knows about Jimmy Wales
I had assumed that this would already be the case.
> I still don't understand the point of keeping a potentially unfriendly
>AI in jail and talking to it on the lowest bandwidth connection possible.
>Either kill it if you can or trust it (so as not to make a new enemy out of
>an entity that might have been at least neutral.) The only way you are
>likely to discover that it is unfriendly is to catch it in an unfriendly
>act. You might try to feed it false information about its circumstances to
>induce a jailbreak that would be unsuccessful yet reveal its true nefarious
>nature, but riding a great white shark bareback would be just as exciting,
>and less dangerous for the spectators.
That gives me an idea! How about for the first round of testing we have
the AI communicate in a "simulated" world? Set up the VT100 black-box
scenario, but instead of a human, lock a Turing-capable chat-bot in with the
SI for a week. Hopefully Turing-capable chat-bot technology will be
available by this time; if not, use the best available. Then program it so
that it is likely to ask the types of questions you want (if any specific
ones are desired). Set up the experiment so that neither the SI nor the
chat-bot has any information indicating that the chat-bot is not a person
or that it is being closely monitored. Of course, you would have monitoring
software running on the chat-bot system that would look for anything
unusual: everything from a basic firewall and packet analyzer to checking the
chat-bot's binary files and knowledge base for tampering.
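The tamper-check part of that monitoring idea could be sketched with ordinary
file-integrity checking: hash every file on the chat-bot system before the
experiment, then periodically re-hash and compare. This is a minimal
illustration in Python, not anything from the original proposal; the function
names and the choice of SHA-256 are my own assumptions.

```python
import hashlib
import os

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large knowledge-base files don't fill memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    """Record a baseline digest for every file under root (taken before the run)."""
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            baseline[path] = hash_file(path)
    return baseline

def detect_tampering(baseline):
    """Re-hash every baselined file and report anything modified or deleted."""
    altered = []
    for path, digest in baseline.items():
        try:
            current = hash_file(path)
        except FileNotFoundError:
            altered.append((path, "deleted"))
            continue
        if current != digest:
            altered.append((path, "modified"))
    return altered
```

A real setup would also have to watch for *new* files and for in-memory
changes, which hashing on disk can't see, but the same compare-against-baseline
principle applies.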
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT