RE: How Kurzweil lost the Singularity

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 15 2002 - 17:41:21 MDT


One more e-mail before dashing out the door -- fittingly, a self-response ;>

> He has a different estimate of the growth curve of intelligence in the
> near-superhuman realm than you do.
>
> He understands the idea of exponential intelligence increase thru AI
> self-modification, he just thinks the exponent will be smaller than you
> think it will be.
>
> I think he's overpessimistic and you're overoptimistic in this particular
> regard, but we're all just grabbing exponents out of our asses here,
> basically...

It occurs to me that a lower exponent of the growth curve of self-modifying
AI intelligence maps very naturally into the "grand historical perspective"
in which individual human actions don't matter much.

Because, if there's gonna be a "slow takeoff" rather than a "hard takeoff",
there's more time for convergence into an inevitable pattern of
Singularity-dom, whereas if things are gonna happen really fast, it seems
there's more chance for small quirks in the initial conditions (like our
actions) to have effects....

Of course, this is just math-inspired intuition, not math ;>
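
If you want the intuition slightly more concrete, here's a toy sketch in
Python -- every number in it (the exponents, the convergence rate c, the
"superhuman" threshold) is invented, as is the assumption that quirks in
the initial conditions decay toward the inevitable pattern at a fixed rate:

import math

def surviving_quirk(k, c=0.5, epsilon=1.0, i0=1.0, i_star=1e6):
    """Quirk left over when intelligence I(t) = i0 * e^(k*t) first
    hits the superhuman level i_star, assuming the quirk decays
    toward the 'inevitable pattern' at fixed rate c meanwhile."""
    t_takeoff = math.log(i_star / i0) / k   # wall-clock time to takeoff
    return epsilon * math.exp(-c * t_takeoff)

for k in (0.1, 1.0, 10.0):   # slow vs. hard takeoff exponents (invented)
    print(f"k={k:5.1f}  quirk surviving at takeoff: {surviving_quirk(k):.3g}")

With these made-up parameters, the slow takeoff (k=0.1) leaves about 1e-30
of the original quirk by the time superintelligence arrives, while the hard
takeoff (k=10) leaves about half of it -- which is just the verbal argument
above, restated numerically.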

> > The second really bizarre thing I've heard Kurzweil say was at his
> > SIG at the recent Foresight Gathering, when I asked why AIs thinking
> > at million-to-one speeds wouldn't speed up the development of
> > technology, and he said "Well, that's another reason to expect
> > Moore's Law to remain on course."
>
> I don't get this one... sounds like a miscommunication...

OK, I take that back. I agree: it sounds like a really dumb thing for him
to have said. Of course, a superintelligent AI is gonna lead to
superacceleration of Moore's Law, since the pace of Moore's Law is set by
the intelligence doing the engineering -- today, technologically-augmented
human intelligence -- and minds running at million-to-one speeds should
bend the curve sharply upward, not hold it on course.
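
Just to put a toy number on that -- pure back-of-the-envelope, assuming the
doubling time scales linearly with the serial speed of the minds doing the
R&D, and ignoring physical bottlenecks like fab construction (both numbers
below are illustrative, nobody's actual estimates):

months, speedup = 18, 10**6        # canonical doubling time; "million-to-one"
seconds = months * 30 * 24 * 3600  # ~4.7e7 seconds of human-speed R&D
print(f"doubling at AI speed: ~{seconds / speedup:.0f} seconds")

On that crude model, each doubling takes well under a minute of wall-clock
time. "On course" it is not.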

However, Kurzweil has said a lot of other, smarter things too. I can see
that this particular dumb statement fits in with some imperfect aspects of
his philosophy... but even so, he has explicitly said to me that,
post-Singularity, we're going to enter into a domain where all our
predictions are off. He simply chooses not to emphasize this in his book.
Apparently, though, in some off-the-cuff reactions (like the one you cite)
he carries his underemphasis of the GREAT UNKNOWN to come a little too
far...

ben


