From: Tennessee Leeuwenburg (firstname.lastname@example.org)
Date: Tue Aug 29 2006 - 18:01:01 MDT
John K Clark wrote:
> "Ricardo Barreira" <email@example.com>
>> How do you even know the AI will want any control at all?
> If the AI exists it must prefer existence to non-existence, and after
> that it is a short step, a very short step, to what Nietzsche called
> "the will to power".
>> Tennessee's point is that a powerful AI doesn't strictly imply a
>> singularity.
> Yes that was his point, a point I believe is ridiculous.
I don't see why. Do you think the existence of humans strictly implies a
singularity? The most intelligent humans are more than "twice" as
intelligent as the average (using crude test measures), yet they haven't
sparked off a singularity or gone off to make eugenic love to each other.
How intelligent does an intelligence need to be before a singularity is
implied?
>> I challenge you to prove otherwise
> Prove? This isn't high school geometry; I can't prove anything about an
> intelligence far, far greater than my own. About the only thing I can say
> about it is that it's a good bet it won't act like a fool. Eliezer thinks
> this mega genius will behave like a jackass and place our well-being above
> its own. I think that is unlikely.
I have advanced that position before; it wasn't received well. It
appears that SL4 regards foolishness and intelligence as unrelated, or
at least not necessarily related.