From: Brian Atkins (firstname.lastname@example.org)
Date: Fri Jun 22 2001 - 20:27:01 MDT
If you have an AI that has no prior experience talking to humans, then
you could set it up to talk to a fake human that has a known exploit
built in ("honeypot"). Then you watch and see if the AI takes advantage
of the hole when it spots it, or whether it plays nice.
If the AI has a lot of previous experience with humans, then your fake
one won't be able to fool it. Not quite sure what you could do in that
case... but if it has already been with humans while "growing up" then
you shouldn't even find yourself in this situation.
-- Brian Atkins Director, Singularity Institute for Artificial Intelligence http://www.intelligence.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT