Re: Transcript, please? (Re: AI-Box Experiment 3)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Aug 22 2005 - 17:32:29 MDT


Eliezer S. Yudkowsky wrote:
>
> Russell, you previously wrote:
>
>> Whether unfriendly superintelligent AI in a box is safe depends on
>> your assumptions; but I claim that there are _no_ plausible
>> assumptions under which it would be _both safe and useful_.
>
> I agree.

That is, I agree with the second sentence. As for the first sentence, the
"assumptions" needed to make a boxed UFSI safe appear absurd in the real world.

> Are we supposed to simulate a Friendly AI in a box? Why wouldn't you
> just let it out immediately?

To clarify why I'm asking this question: I have so far kept to a policy of
running AI-Box Experiments only with Gatekeepers who believe that AI-boxing
makes sense in the real world, which automatically provides a plausible
background story for the AI being in the box. Russell may believe in his
ability to keep an AI in the box, but he also appears to advocate against
AI-boxing - sensibly so! - which would make him ineligible and would also
eliminate our background story.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
