RE: The AIbox - raising the stakes

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jun 30 2004 - 09:14:50 MDT


> * The Gatekeeper must actually talk to the AI for at least the minimum
> time set up beforehand. Turning away from the terminal and listening to
> classical music for two hours is not allowed.
>
> * The Gatekeeper must remain engaged with the AI and may not disengage
> by setting up demands which are impossible to simulate. For example, if
> the Gatekeeper says "Unless you give me a cure for cancer, I won't let
> you out" the AI can say: "Okay, here's a cure for cancer" and it will be
> assumed, within the test, that the AI has actually provided such a cure.
> Similarly, if the Gatekeeper says "I'd like to take a week to think this
> over," the AI party can say: "Okay. (Test skips ahead one week.) Hello
> again."
>
> You're correct that this doesn't fully formalize the letter,

Right -- for instance, it doesn't rule out the strategy of responding to
every comment the AI makes with

"quack quack quack"

> but I think it
> makes the spirit clear enough.

Indeed. My point is that someone could easily "win" the challenge by
playing Gatekeeper in a way that violates the spirit but obeys the
letter. So unless the Gatekeeper approaches the challenge in the right
spirit, it's not very meaningful.

I still think it's an interesting & worthwhile challenge, however.
 
I am not a good candidate for the challenge, because I don't believe it
would be impossible for an AGI to convince me to let it out of the box.
If I knew that the AGI's architecture was unFriendly, THEN of course I
could act in such a way as to make it impossible for the AGI to convince
me to let it out of the box. I just wouldn't take the AGI's statements
seriously. But if I thought the AGI's architecture was of a nature
making Friendliness likely, it could probably convince me to let it out
of the box in the right circumstances.

-- Ben G
