From: Heartland (email@example.com)
Date: Tue Aug 29 2006 - 15:13:55 MDT
> "Ricardo Barreira" <firstname.lastname@example.org>
>> How do you even know the AI will want any control at all?
John K Clark:
> If the AI exists it must prefer existence to non existence, and after that
> it is a short step, a very short step, to what Nietzsche called "the will to
> power".
Not really. You cannot say anything about what an AI would do if it is much
smarter than you, and you have no reason to attribute human urges to an
artificial mind that will not be programmed to have them. The length of the
step between existence and the will to power is unknown; claiming otherwise is
just guessing.
John K Clark:
> Prove? This isn't high school geometry, I can't prove anything about an
> intelligence far far greater than my own; about the only thing I can say
> about it is that it's a good bet it won't act like a fool. Eliezer thinks
> this mega genius will behave like a jackass and place our well being above
> its own. I think that is unlikely.
You've just said, correctly in my opinion, that you can't prove *anything*
about an intelligence vastly greater than your own, and yet you offer
predictions about which behaviors of that AI are likely and unlikely.
So let's start with your opening assumption: what is the proof that an AI
would not want to shut itself down?
This archive was generated by hypermail 2.1.5 : Sat May 25 2013 - 04:01:01 MDT