Re: META: Dangers of Superintelligence

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Sun Aug 29 2004 - 17:24:49 MDT


On Sun, Aug 29, 2004 at 05:05:45PM -0400, Eliezer Yudkowsky wrote:
> Eliezer Yudkowsky wrote:
>
> >It yet continues to amaze me that, even after reading the results
> >of the last two tests, people are still so confident. Come on,
> >isn't there anyone besides me who wants to give this a try?
> >Think of the damage to my mystique if someone else succeeded!
>
> Okay, two people got this wrong. So to clarify: I'm no longer
> accepting challenges for AI-Box experiments unless someone offers
> to pay a serious sum of money. Otherwise it's not worth the clock
> ticks.

I still want to know how much money I have to give you to see the
logs of the previous sessions. You were going to get back to me on
that after you talked to the other participants. I'll happily sign
a non-disclosure first.

I probably *could* offer you enough money to get you to run an
AI-box challenge against me, but I'm not going to, because I have
no reason whatsoever to believe that I am stupid enough to be
unconvincable (and yes, being impossible to convince of something
is a sign of *stupidity*, not intelligence).

> However, if anyone else out there is willing to accept challenges,
> please go ahead and take their money. C'mon, show that Eliezer
> isn't the only one who can do it!

I honestly can't think of how one would accomplish this sort of
thing, which is why I want to see the transcripts. Now, *after*
I've seen the transcripts, maybe.

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"