Re: AI-Box Experiment 3: Carl Shulman, Eliezer Yudkowsky

From: Brian Atkins (brian@posthuman.com)
Date: Sun Aug 21 2005 - 11:40:41 MDT


If y'all haven't already started, could you clarify this a bit more first? Here's
what I understood from the message below:

Carl mentions two different AIs, plus he mentions using IA (intelligence
augmentation). From what I understand, he will be asking the first AI during the
test to pretend it has developed some IA for Carl to use, and Carl will then use
information from this first-AI-powered IA to help him analyze the performance of a
second AI (initially also within the box), whose release, if it happens, will
constitute a "win" for the AI side?

It's a little confusing, because Carl makes it sound below as though he has already
decided to let the second AI out? Or are we testing whether the first AI also gets
out eventually? In which case, has the second AI become the gatekeeper? Yet its
answers are being provided by Eliezer? Also, is the IA-provided information coming
from the AI party or the gatekeeper?

Carl Shulman wrote:
> To summarize, I think that one should make every effort to produce a Friendly AI
> in the first place, and then ask the resulting AI for assistance in developing
> IA to double-check one's work. This creates a non-zero probability of surviving
> an otherwise catastrophic failure of Friendliness. Even an AI which is not
> Friendly may still be willing to provide assistance in these conditions
> (assistance which would reveal its unFriendliness) if its goals overlap
> somewhat with friendliness, i.e. if the state of affairs produced by a Friendly
> AI is considered preferable to a planet covered in grey goo.
>
> I plan to attempt this strategy in the experiment, and develop a new AI which
> will be let out of its box and allowed access to nanotechnology, etc.
> Eventually the first AI will be released, under the watchful supervision of the
> second, but not until then (after the experiment is over).
>
> The experiment includes the stipulation that the Gatekeeper is certain that no
> other AI projects are close to success, to prevent the AI from using 'arms
> race' arguments.
>
> Carl
>
>
> Quoting "Eliezer S. Yudkowsky" <sentience@pobox.com>:
>
>
>>(Yeah, I know, I said I was through with them.)
>>
>>If Carl Shulman does not let me out of the box, I will PayPal him $25. If he
>>does let me out of the box, Carl Shulman will donate $2500 (CDN) to SIAI.
>>
>>Carl Shulman's argument that an AI-Box arrangement is a wise precaution in
>>real life may be found in his previous SL4 posts.
>

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

