RE: SIAI & Kurzweil's Singularity

From: Olie L (neomorphy@hotmail.com)
Date: Thu Dec 15 2005 - 15:46:48 MST


(/This is a typical SL4 newbie question-and-answer exchange. How is it better
to have these fill up the email list than to put them on a forum?/)

As I read it, the big difference is that certain groups, including the SIAI,
envisage that an AI able to directly modify itself could produce a rate of
technological improvement far beyond current trends, even if those trends are
exponential. Such self-improving AIs (~seed AI) do not fit neatly into the GNR
revolutions as described by Kurzweil.

Even if self-improving AIs take several years to integrate the relevant
infrastructure (so as to have their full impact on humanity), the rate of
technological development would change much more abruptly than the exponential
trend suggests. This matches more closely what is commonly described as a
"hard takeoff" (which could happen really, REALLY fast).

(more)

>From: 1Arcturus <arcturus12453@yahoo.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: SIAI & Kurzweil's Singularity
>Date: Thu, 15 Dec 2005 11:17:58 -0800 (PST)
>
>I had another question about SIAI in relation to Kurzweil's latest book
>Singularity is Near.
>
> If I have him right, Kurzweil predicts that humans will gradually merge
>with their technology - the technology becoming more humanlike, more
>biologically compatible, and integrating into the human body and mental
>processes, until eventually the purely 'biological' portion becomes less
>and less predominant or disappears entirely.

That's not a question.

>SIAI seems to presuppose a very different scenario - that strongly
>superintelligent AI will arise first in pure machines, and never
>(apparently) in humans. There seems to be no indication of 'merger', more
>like a kind of AI-rule over mostly unmodified humans.

It's not so much a "never". It's that the best way to achieve
superintelligence is to start with an expandable design. Once there is
superintelligence, it would be much easier to figure out how to integrate
humans.

>
> Some of this difference may be because Kurzweil predicts nanotechnology
>in the human body (including the brain) and very advanced human-machine
>interfaces will arise before strongly superintelligent AI, and that
>strongly superintelligent AI will require the completion of the
>reverse-engineering of the human brain. (Completed reverse-engineering of
>the brain + adequate brain scanning surely = ability to upload part or all
>of human selves?)

I don't see that being a necessity at all. Uploading is _very_ different from
having a full understanding of how the brain operates.

>
> But SIAI seems to assume AIs will become strongly superintelligent by
>their own design, arising from human designs, before humans ever finish
>reverse-engineering the human brain.

Again, not "will" but /MAY/, which matters because of the possibility of a
hard take-off.

>The lack of a fully functional interface with the strongly intelligent AIs
>would cause humans to be dependent on the AIs to do the thinking from then
>on, and the AIs would of course also take on the responsibility for that
>thinking. This seems to assume the AIs would not be able, or not want, to
>create interfaces or upload the humans -- that is, they would not 'uplift'
>the humans to their own level of intelligence so that they could then
>understand each other.

Consider for a moment that a lot of people won't want to change themselves.
Would they rather be dominated by upgraded humans (who have human foibles
and motivations) or be "governed" by AIs who lack self-interested biases?

>
> I am trying to understand SIAI's position, or at least the emphasis of
>posters here and some representatives I have heard, contrasted with
>Kurzweil's book. There seems to be a contrast to me, although I know
>Kurzweil is involved with SIAI also.
>
...

> None of these things are problematic if humans merge with technology and
>acquire its capacity for strong superintelligence. That is, humans would be
>at the very center of the Singularity and direct its development, for
>better or worse, with 'open eyes', and taking responsibility themselves
>rather than lending it to an external machine.

Yes, and I'm sure you're aware of humanity's success in long-term planning.
Foresight is not our greatest quality, and being in the midst of rapid
technological development does not make it any easier to guide all the
development by all the people...

-- Olie
>


