From: James MacAulay (firstname.lastname@example.org)
Date: Thu Aug 31 2006 - 11:50:24 MDT
John K Clark:
> If the AI exists it must prefer existence to non-existence, and after
> that it is a short step, a very short step, to what Nietzsche called
> "the will to power".
Perhaps, but is a Singularity really the road to power? Maybe
Nietzsche was more subtle than I give him credit for, but if I wanted
power then I'd prefer my world to be *more* predictable, so that I
could better work out how to control it.
On 29-Aug-06, at 2:48 AM, Tennessee Leeuwenburg wrote:
>> A very powerful AI is the very definition of the Singularity.
> Not on my understanding of it. The Singularity implies continual
> improvement. A very powerful AI might choose not to continually
> improve itself. A very powerful AI might not be beyond our
On 29-Aug-06, at 5:19 AM, John K Clark wrote:
> "José Raeiro" <email@example.com>
>> A very powerful AI may continue its growth exponentially until a
>> point, which is beyond our current capability of understanding,
> OK, sounds reasonable.
>> where it concludes that it's best to stop
> Huh? How do you conclude that this unknowable AI would conclude that
> it would be best if it stopped improving itself? Is it common for
> entities to decide that they don't want more control of the
> universe? I think not.
The Singularity is about radical and unpredictable change, which may
or may not constitute 'improvement' (and perhaps not even 'growth'),
especially from the viewpoint of the beings whose lives are about to
be transformed. A very powerful AI might decide to try to stave off
the Singularity for the same reasons that motivate some humans today:
they think there is a good chance that this radical change will *not*
be an improvement, as they understand it.
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:22 MDT