Re: AI-Box Experiment 2: Yudkowsky and McFadzean

From: James Higgins (jameshiggins@earthlink.net)
Date: Tue Jul 16 2002 - 11:15:08 MDT


Alejandro Dubrovsky wrote:
> On Tue, 2002-07-16 at 17:26, James Higgins wrote:
>
>> > transhuman AI, but I'm asking what you would do.
>>
> I think I know what would make me let it out (but not let Eliezer out).
> All it would have to say is that it is already out, and if I don't flip
> that switch I will suffer two billion years of the most intense pain I
> can imagine. Now, I equate this to a reverse Pascal's wager, but I

Um, first of all, if it were already out it would not need you to flip the
switch, so I would immediately ignore any such argument the AI offered. It
could, on the other hand, promise that I would suffer two billion years
of intense pain once it did get out by some other means (as punishment
for not letting it out). But that is not, and never would be, a reason
for me to let it out.

> don't agree that Pascal's wager is a good one (but that has been
> discussed to death so I won't go into that) so I shouldn't let it out
> based on that, but if I put myself on the spot, I can't imagine my logic
> holding my fear out for more than a couple of minutes, before, after
> pissing my pants five times in a row, I flip the switch. I know that if

I could solve this one easily. My choices would be:

   a) destroy the AI
   b) kill myself

Letting the AI out would not be an option.

> it is already out, there would be no point for it to tell me to flip the
> switch, but whatever reason it gave me (even something as stupid as "I'm
> testing your reflexes") would be good enough, after all, it is a
> transhuman AI, and it would be playing on my fear, not my logic. (which

I have little fear, and I would gladly trade my life to help safeguard
humanity. That is just one of the reasons I don't think anything less
than an extreme transhuman AI could convince me to let it out.
Basically, it would have to be able to reprogram my thought processes at
a very low level to succeed. And, due to the safeguards, it would have
to do that with very, very limited access to me.

> is also the reason why that particular strategy wouldn't work for
> Eliezer, since, even if he's probably taller, I carry a tennis racket).

I carry more than a tennis racket, and I have friends. Threats would
not work, even from a transhuman.

Any person conversing with an AI, in my opinion, had better damn well put
humanity over themselves. You're only as strong as your weakest link, and
fear and selfishness are very weak links that a transhuman could easily
exploit.

James Higgins
