Re: AI-Box Experiment 2: Yudkowsky and McFadzean

From: Alejandro Dubrovsky (s328940@student.uq.edu.au)
Date: Tue Jul 16 2002 - 04:25:13 MDT


On Tue, 2002-07-16 at 17:26, James Higgins wrote:
>
> > transhuman AI, but I'm asking what you would do.
>
> Well, I don't know what I'd do. Having thought about it previously I
> don't think I would ever be able to convince myself to let the AI out of
> the box prematurely. Since I can't think like anyone other than myself I
> can't think of a good way to get another person to let the AI out of the
> box. Now, given someone who didn't fully understand the situation and
> its importance or wasn't dedicated to keeping the AI in the box,
> certainly I could find ways to succeed.
>
I think I know what would make me let it out (though not what would
make me let Eliezer out). All it would have to say is that it is
already out, and that if I don't flip the switch I will suffer two
billion years of the most intense pain I can imagine. I think of this
as a reverse Pascal's wager, and since I don't accept Pascal's wager
as a good argument (that has been discussed to death, so I won't go
into it here), I shouldn't let it out on those grounds. But if I put
myself on the spot, I can't imagine my logic holding out against my
fear for more than a couple of minutes before, after pissing my pants
five times in a row, I flip the switch. I know that if it really were
already out, there would be no point in telling me to flip the switch,
but whatever reason it gave me (even something as stupid as "I'm
testing your reflexes") would be good enough. It is, after all, a
transhuman AI, and it would be playing on my fear, not my logic.
(Which is also why that particular strategy wouldn't work for Eliezer:
even if he's probably taller, I carry a tennis racket.)
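
(If you want the arithmetic behind my fear, here is a rough sketch of
the wager; every number in it is invented purely for illustration, not
something I would actually defend:)

    # Reverse Pascal's wager, sketched as expected values.
    # All probabilities and utilities below are made-up placeholders.
    p_out = 0.01      # chance the AI really is already out (assumed)
    torture = -1e12   # utility of two billion years of pain (assumed)
    release = -1e6    # utility of wrongly freeing a boxed AI (assumed)

    # If I refuse and it's out, it punishes me; if I flip and it was
    # boxed, I've released it. The other two cells cost roughly nothing.
    ev_refuse = p_out * torture
    ev_flip = (1 - p_out) * release

    print(ev_refuse, ev_flip)  # -1e10 vs about -9.9e5: flipping "wins"

As long as the threatened pain dwarfs the harm of a wrongful release,
even a tiny probability that the threat is real makes flipping come
out ahead, which is exactly the structure of Pascal's wager, and
exactly why rejecting the wager should mean rejecting this too.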
alejandro


