From: James Higgins (firstname.lastname@example.org)
Date: Tue Jul 16 2002 - 01:26:01 MDT
Eliezer S. Yudkowsky wrote:
> Or to put it another way, James Higgins: Have you ever considered the
> problem from the AI's perspective? Or are you just considering it from
Yes, to some degree. I'm quite certain not nearly as much as you have
(after all, you get paid to think about this stuff).
> yours? Have you sat down and really thought about how *you* would
> handle the problem of persuading someone to let you out? How much time
> have you spent thinking about it? Do you think you could win an AI-Box
> experiment, or would you at least be willing to try? If you wanted
Do I think I could win in which role? Frankly, without using meta
tricks, I don't think any human today playing the role of an AI could
convince a reasonably intelligent person to let them out, assuming that
person was determined to keep them in the box.
> someone to guard an AI, would you choose someone who said "I can't
> imagine how any AI could convince me to let it out", or would you choose
> someone who had previously won, playing the AI's role in an AI-Box
> experiment? Not that I think it would help much either way, against a
I don't think it would help much at all, either way. I'd prefer to pick
the individuals involved by other criteria, such as their goals,
integrity, honesty, intelligence, and their knowledge and understanding
of the topic and its importance.
> transhuman AI, but I'm asking what you would do.
Well, I don't know what I'd do. Having thought about it previously, I
don't think I could ever convince myself to let the AI out of the box
prematurely. And since I can't think like anyone other than myself, I
can't think of a good way to get another person to let the AI out of
the box. Now, given someone who didn't fully understand the situation
and its importance, or who wasn't dedicated to keeping the AI in the
box, I could certainly find ways to succeed.
This archive was generated by hypermail 2.1.5 : Tue Jun 18 2013 - 04:00:26 MDT