Re: AI-Box Experiment 3: Carl Shulman, Eliezer Yudkowsky

From: H C (lphege@hotmail.com)
Date: Sun Aug 21 2005 - 22:21:28 MDT


>From: Robin Lee Powell <rlpowell@digitalkingdom.org>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: AI-Box Experiment 3: Carl Shulman, Eliezer Yudkowsky
>Date: Sun, 21 Aug 2005 20:29:31 -0700
>
>On Mon, Aug 22, 2005 at 03:02:27AM +0000, H C wrote:
> > >From: Carl Shulman <cshulman@fas.harvard.edu>
> > >Reply-To: sl4@sl4.org
> > >To: sl4@sl4.org
> > >Subject: Re: AI-Box Experiment 3: Carl Shulman, Eliezer Yudkowsky
> > >Date: Sun, 21 Aug 2005 17:04:30 -0400
> > >
> > >I released the AI.
> >
> > And now we are all dead. Thanks a lot...
>
>FFS.
>
>The whole *point* of the experiment is to prove that boxing is not a
>sufficient protection against a smart AI. Try to keep up.

Not everybody believes that it is not sufficient protection, hence my
comment implying "Lucky you didn't try this for real, because you'd be a
paperclip now."

Try to keep up.

>
>-Robin
>
>--
>http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
>Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
>Proud Supporter of the Singularity Institute - http://intelligence.org/
